# Automorphisms of Generic Abelian Varieties

Automorphism groups of elliptic curves are very well understood. Of course, every elliptic curve has the automorphism $[-1]$ of order $2$. If we are over an (algebraically closed) field, this is the only non-trivial automorphism iff the $j$-invariant of our elliptic curve is neither $0$ nor $1728$. Over $\mathbb{C}$ the only other possibilities for automorphism groups are $\mathbb{Z}/4$ and $\mathbb{Z}/6$, occurring for $E = \mathbb{C}/\Lambda$ where $\Lambda$ is the square lattice or the "honeycomb" (hexagonal) lattice respectively. In particular, the generic elliptic curve has automorphism group $\mathbb{Z}/2$, i.e. the locus of elliptic curves with bigger automorphism group has positive codimension in the moduli space of elliptic curves.

I am interested in the analogous question for abelian varieties. To force all the automorphism groups to be finite, it seems sensible to consider (principally) polarized abelian varieties. Let $\mathcal{A}_g$ be the moduli space of (principally) polarized complex abelian varieties of dimension $g$.

What is the codimension of the locus of such varieties with automorphism group bigger than $\mathbb{Z}/2$?

Likewise, one can ask the same question not about arbitrary principally polarized abelian varieties, but about those equipped with an endomorphism and level structure. So, in addition to the polarization on our (complex) abelian variety $A$ we also want a ring homomorphism $\mathcal{O}_F \to \mathrm{End}(A)$ (for a fixed, say, imaginary quadratic field $F$), compatible with the Rosati involution, and a level structure (plus some determinant/trace condition). For a generic such $A$, its automorphism group may now contain $\mathcal{O}_F^\times$, depending on the level structure. Let $Sh_g$ be the moduli space of such complex abelian varieties with PEL structure (with the choices of $F$ and level implicit) of dimension $g$. Let $G$ be the smallest automorphism group of a point in $Sh_g$.
What is the codimension of the locus of points of $Sh_g$ having automorphism group bigger than $G$?

- The integral symplectic group $Sp_{2g}\mathbb{Z}$ acts biholomorphically on the Siegel upper half space $\mathfrak{h}_g$. So for $H$ a finite subgroup of $Sp_{2g}\mathbb{Z}$ the fixed point sets $\mathfrak{h}_g^H$ are totally geodesic contractible holomorphic subvarieties, hence all have even (real) dimension. Explicitly, we need to determine the dimension of the centralizer $Z(H)$ of $H$ in $Sp_{2g}\mathbb{R}$. For genus $g=2$, we can exhibit explicit finite subgroups whose fixed point sets are 4-dimensional subspaces of the 6-dimensional $\mathfrak{h}_2$. – J. Martel Apr 23 '13 at 21:45
- For instance, these finite subgroups (all of which are cyclic) are computed in Connelly and Kozniewski's "Finiteness properties of classifying spaces for $\Gamma$-actions", wherein they refer to a paper by K. Ueno, "On fibre spaces of normally polarized abelian varieties of dimension 2, I", J. Fac. Sci. Univ. Tokyo 18 (1971) 37–95. I was unable to locate a copy of this Ueno paper. – J. Martel Apr 23 '13 at 21:50
- Moreover, if $H$ is a finite subgroup of $Sp_{2g}\mathbb{R}$ acting irreducibly on $(\mathbb{R}^{2g}, \omega)$ then the corresponding fixed point set in $\mathfrak{h}_g$ will be a point. We'll find that finite subgroups which are very $\mathbb{R}$-reducible yield larger-dimensional fixed point sets. – J. Martel Apr 23 '13 at 22:07
- I would guess that for your first question, if $g>1$ then the ppavs that are products of an elliptic curve and a ppav of dimension $g-1$ would give a maximal codimensional component of the locus of ppavs with extra automorphisms (and this is probably the unique component of this dimension if $g>2$). – ulrich Apr 24 '13 at 5:54
- @J. Martel: There is no problem with the polarisation at all; one just takes the product polarisation! – ulrich Apr 25 '13 at 4:46
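As a quick consistency check on the dimension counts in the comments above, here are the standard facts being used (not from the thread itself):

```latex
% Dimension of the Siegel upper half space:
\dim_{\mathbb{C}} \mathfrak{h}_g = \frac{g(g+1)}{2},
\qquad
\dim_{\mathbb{R}} \mathfrak{h}_g = g(g+1).
% For g = 2 this gives \dim_{\mathbb{R}} \mathfrak{h}_2 = 6, so the
% 4-dimensional fixed sets \mathfrak{h}_2^H mentioned above have real
% codimension 2, i.e. complex codimension 1.
% The elliptic (g = 1) cases with extra automorphisms:
\mathrm{Aut}(\mathbb{C}/\mathbb{Z}[i]) \cong \mathbb{Z}/4 \ (j = 1728),
\qquad
\mathrm{Aut}(\mathbb{C}/\mathbb{Z}[\zeta_6]) \cong \mathbb{Z}/6 \ (j = 0).
```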
# Reachability Oracles for Directed Transmission Graphs

Research paper by Haim Kaplan, Wolfgang Mulzer, Liam Roditty, Paul Seiferth

Indexed on: 28 Jan '16. Published on: 28 Jan '16. Published in: Computer Science - Computational Geometry

#### Abstract

Let $P \subset \mathbb{R}^d$ be a set of $n$ points in $d$ dimensions such that each point $p \in P$ has an associated radius $r_p > 0$. The transmission graph $G$ for $P$ is the directed graph with vertex set $P$ such that there is an edge from $p$ to $q$ if and only if $d(p, q) \leq r_p$, for any $p, q \in P$. A reachability oracle is a data structure that decides for any two vertices $p, q \in G$ whether $G$ has a path from $p$ to $q$. The quality of the oracle is measured by the space requirement $S(n)$, the query time $Q(n)$, and the preprocessing time.

For transmission graphs of one-dimensional point sets, we can construct in $O(n \log n)$ time an oracle with $Q(n) = O(1)$ and $S(n) = O(n)$. For planar point sets, the ratio $\Psi$ between the largest and the smallest associated radius turns out to be an important parameter. We present three data structures whose quality depends on $\Psi$: the first works only for $\Psi < \sqrt{3}$ and achieves $Q(n) = O(1)$ with $S(n) = O(n)$ and preprocessing time $O(n\log n)$; the second data structure gives $Q(n) = O(\Psi^3 \sqrt{n})$ and $S(n) = O(\Psi^5 n^{3/2})$; the third data structure is randomized with $Q(n) = O(n^{2/3}\log^{1/3} \Psi \log^{2/3} n)$ and $S(n) = O(n^{5/3}\log^{1/3} \Psi \log^{2/3} n)$ and answers queries correctly with high probability.
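As a baseline for what these oracles improve on, the transmission graph itself and a naive per-query BFS are easy to write down. The sketch below (Python, not from the paper) uses the $O(n^2)$ construction and $O(n+m)$ per-query search whose costs the paper's data structures are designed to avoid:

```python
from collections import deque
from math import dist

def transmission_graph(points, radii):
    """Adjacency lists: directed edge p -> q iff d(p, q) <= r_p."""
    n = len(points)
    return [
        [q for q in range(n) if q != p and dist(points[p], points[q]) <= radii[p]]
        for p in range(n)
    ]

def reaches(adj, s, t):
    """BFS from s; O(n + m) per query -- the cost a reachability oracle avoids."""
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

# Small planar example: point 0 and 1 have radius 1, point 2 a small radius,
# so edges exist 0 -> 1, 1 -> 0, 1 -> 2, but 2 has no outgoing edges.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
rad = [1.0, 1.0, 0.5]
adj = transmission_graph(pts, rad)
```

Note the asymmetry: reachability in a transmission graph is directed, so `reaches(adj, 0, 2)` holds while `reaches(adj, 2, 0)` does not.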
Conference paper (Embargoed Access)

# Comparison among different approaches of estimating pore pressure development in liquefiable deposits

Rios, Sara; Millen, Maxim; Quintero, Julieth; Viana da Fonseca, António

### JSON-LD (schema.org) Export

{
  "description": "<p>Estimating pore pressure development in liquefiable deposits is very important to predict liquefaction consequences at the surface namely in terms of damages to structures. Several simplified methods have been proposed from which the stress-based method from Seed et al. (1975) is the most widely used, due to its simplicity. Various attempts have been made however to develop an energy based method that could avoid the conversion to an equivalent loading, trying to quantify the liquefaction resistance in terms of a measure that reflects the true nature of seismic shear wave loading. This paper investigates the advantages and limitations of stress-based and energy based methods. For that purpose, effective and non linear dynamic numerical analysis were performed and compared with the simplified methods showing the parameters needed in each method, its assumptions and simplifications and its consequences in terms of the pore pressure prediction.</p>",
  "creator": [
    {
      "affiliation": "CONSTRUCT-GEO, Faculty of Engineering of University of Porto, Porto, Portugal",
      "@id": "https://orcid.org/0000-0002-2625-1452",
      "@type": "Person",
      "name": "Rios, Sara"
    },
    {
      "affiliation": "CONSTRUCT, Faculty of Engineering of University of Porto, Porto, Portugal",
      "@type": "Person",
      "name": "Millen, Maxim"
    },
    {
      "affiliation": "CONSTRUCT-GEO, Faculty of Engineering of University of Porto, Porto, Portugal",
      "@type": "Person",
      "name": "Quintero, Julieth"
    },
    {
      "affiliation": "CONSTRUCT-GEO, Faculty of Engineering of University of Porto, Porto, Portugal",
      "@id": "https://orcid.org/0000-0002-9896-1410",
      "@type": "Person",
      "name": "Viana da Fonseca, Ant\u00f3nio"
    }
  ],
  "headline": "Comparison among different approaches of estimating pore pressure development in liquefiable deposits",
  "datePublished": "2019-06-17",
  "url": "https://zenodo.org/record/3465455",
  "@context": "https://schema.org/",
  "identifier": "https://doi.org/10.5281/zenodo.3465455",
  "@id": "https://doi.org/10.5281/zenodo.3465455",
  "@type": "ScholarlyArticle",
  "name": "Comparison among different approaches of estimating pore pressure development in liquefiable deposits"
}
### F Programming

#### F.1 Writing your own Figaro-style commands

Portable Figaro 5.1 is not intended as a programming environment. It is not recommended to write software that calls the Figaro object libraries. You should use the Starlink and ADAM facilities instead, namely the NDF data access subroutine library and the ADAM parameter system library. Also consult Section G.1 on changes between the VMS and Unix versions. The “Figaro Programmers’ Guide” of January 1991 is still largely valid, apart from section 12.

However, there exists a large body of code—in a handful of released software items as well as user-written private applications—that relies on calling Figaro subroutines. For this reason the compiled object libraries are included in the release of Figaro.

Let us assume that you have two private applications ‘myapp1’ and ‘myapp2’ that you want to port from VAX/VMS Figaro to Portable Figaro. As you are aware, each application consists of the Fortran source code ‘myapp1.f’ and the interface source code ‘myapp1.con’. The Fortran code needs virtually no change, but the interface must be completely re-written. You can in the first instance do an automatic conversion of the interface, but this is only possible on VAX/VMS:

   $ !  Startup for Figaro software development
   $ figaro
   $ figdev
   $ !  Convert .con into .par
   $ crepar myapp1
   $ !  Convert .par into .ifl
   $ par2ifl myapp1

Now take the two source files per application—‘myapp1.f’ and ‘myapp1.ifl’—to your Unix machine. Both are ASCII files. In the Fortran code, you have to change any include statements. The include files should be specified only by their file name stem, without path or name extension, and they should be specified in upper case. The most common include statement then becomes

   INCLUDE 'DYNAMIC_MEMORY'

The interface file ‘myapp1.ifl’ can probably be left alone. However, if the application reads or writes what used to be Figaro user variables, then these need entries in the interface file.
For a read-only variable called XXXX use this entry:

   parameter XXXX
      type    '_CHAR' or '_REAL'
      vpath   'GLOBAL'
      ppath   'GLOBAL'
      default ' ' or 0.
      association '<-GLOBAL.XXXX'
   endparameter

and for a write-only variable use

   parameter XXXX
      type    '_CHAR' or '_REAL'
      access  'WRITE'
      vpath   'DEFAULT'
      default ' ' or 0.
      association '->GLOBAL.XXXX'
   endparameter

It is possible within one application to read through the prompt path with e.g. ‘par_rdary’ and later write into the global association with ‘var_setary’:

   parameter XXXX
      type    '_CHAR' or '_REAL'
      vpath   'PROMPT'
      ppath   'GLOBAL,aaaaa,bbbbb,....'
      association '<->GLOBAL.XXXX'
      prompt  'yyyyyy'
   endparameter

More elaborate schemes may prove difficult. If you are not able to do an automatic conversion of the interface on a VMS machine, then you will have to edit your ‘myapp1.con’ into a suitable ‘myapp1.ifl’. Use the interface files of actual Figaro applications for guidance, and consult the documentation on these interface files.

If you are planning to link your application with the original AAO Figaro DSA and DTA data access libraries, then the parameters used by ‘dsa_input’, ‘dsa_output’ etc. should have type ‘_CHAR’ or ‘LITERAL’, just like any parameter obtained with ‘par_rdchar’. Alternatively, if you are planning to use the equivalent Portable Figaro FDA library, then the parameters used to access data should have type ‘NDF’. (See Appendices G.1.7 and G.1.8 for details of FDA, DSA and DTA.)

It is recommended to combine all your private applications into a single monolith, in the same way that all Figaro applications are in fact in three monolithic executables. For this you need an additional Fortran source, the monolith routine. If you call your collection of applications ‘mypack’, then you would need a Fortran source ‘mypack.f’ modelled on ‘/star/figaro/figaro1.f’. You have to change the subroutine statement so that the module is called ‘mypack’ and not ‘figaro1’.
And you have to change the big if-else-if block where the applications are called, so that your applications are called when their name is detected as the command.

Now you can go about compiling the Fortran code. In order for the include statements to work, you need a symbolic link, named as in the source code, pointing to the actual include file. Figaro’s public include file is in /star/figaro, and has a lower-case name.

   % ln -s /star/sources/figaro/dynamic_memory DYNAMIC_MEMORY
   % f77 -c mypack.f myapp1.f myapp2.f

Next you can link your three new object files with the Figaro and Starlink libraries. Be sure that ‘mypack.o’ comes first. On Solaris, currently, you also have to add the option -lucb. Be aware that in the command the ‘quotes’ are not quotes (’), but back-quotes (‘). After linking you should be left with an executable called ‘mypack’.

If you try to run this executable you will only get an error message. But then, you want to run your two applications, not the monolith as such. In order to run your applications from the Unix shell, what you need are symbolic links from the monolith to each application name:

   % ln -s mypack myapp1
   % ln -s mypack myapp2

Now you can try to run either application under its name. If the application’s interface file is in the same directory, that will even work; otherwise you get another error message. You can use the source interface files ‘myapp1.ifl’, but it is better to compile them into binary form:

   % compifl myapp1
   % compifl myapp2

This gives you files ‘myapp1.ifc’ etc. All you need in your executable system is in one directory:

• the monolithic executable,
• one symbolic link for each application from the monolith to the application’s name,
• the binary interface file ‘myapp1.ifc’ for each application.

The source code (.f and .ifl) can be moved somewhere else, and the object files (.o) can be removed. You can rename the Fortran code to have file names ending in ‘.f’, as is common on Unix systems.
Figaro uses ‘.for’ due to an international agreement; there is no rational reason for this. The extra bit that Figaro does when you give the ‘figaro’ command is to define an alias for each application.

You can also extend your monolith so that you can run it from ICL instead of the Unix shell. For this you need to concatenate all your application interfaces:

   % echo 'monolith mypack' > mypack.ifl
   % cat myapp[12].ifl     >> mypack.ifl
   % echo 'endmonolith'    >> mypack.ifl

You can compile this as well with ‘compifl’ and place the interface in the same directory as the monolith. In ICL you will have to define the commands in the same way as the Figaro startup script does:

   ICL> define myapp1 /my/dir/mypack
   ICL> define myapp2 /my/dir/mypack

There is little support for Figaro as a programming environment. In rare cases you may find that your applications call routines that have not been ported from VMS to Unix. The Figaro object libraries exist to serve Figaro applications; they do not pretend to be complete software items in their own right. (You may be lucky and find that the ‘missing’ routine has just been renamed.) That is why you are encouraged to write new software using Starlink libraries, which are designed and supported as software items independent from specific applications.
[–] 40 points (15 children)

sorry, this has been archived and can no longer be voted on

I don't think this actually answers your question, but I'm a big fan of nuclear flyswatter techniques. One of the most hilarious examples I've seen poses this relatively simple problem:

> You just won at a slot machine! The casino always empties the slot when it reaches $1,000.00. The machine only has quarters and nickels. One possible jackpot is 2 quarters and 3 nickels, one possibility is 4,000 quarters and there are many more possibilities. All possibilities totaling $1.00 or less are displayed in the figure below. How many different jackpots are there?

The author then goes on to use the Atiyah-Singer index theorem to solve it. It starts on p. 20 and ends ten pages later with the ridiculous understatement:

> It is clear that the Atiyah-Singer index theorem is not the easiest way to solve this problem.

[–] 21 points (14 children)

In my master's thesis I proved that a class of languages was not regular by showing that it contained languages at arbitrary levels of the arithmetical hierarchy.

[–] 1 point (8 children)

This is usually done with the pumping lemma, right?

[–] 3 points (6 children)

Yes, indeed, or the Myhill-Nerode theorem. As vincentrevelations pointed out, there are languages that can pump without being regular. But in fact, my work was not on regular languages, but on the less-known (and more powerful) ET0L languages -- the situation is similar to the regular languages but I didn't want to go into details. I had proved a type of pumping lemma for ET0L languages, and needed to show that it was not a sufficient condition (only necessary).
So the class of languages was exactly the set of languages which satisfied the lemma, and I needed to show they were not all ET0L. Well, I thought, how far from ET0L can we go? And this led me to a proof. When it came to writing the result I simplified a bit, taking only a single example language and showing it was uncomputable.

[–] 0 points (0 children)

That's great! I love stuff like this.

[–] 0 points (4 children)

Wait, did you show it was undecidable?

[–] 0 points (3 children)

Yeah, I prefer "computable" to "decidable" but they mean the same.

[–] -1 points (2 children)

Sure. I use "decidable" and "recognizable", and I think "decidable" makes sense because you're talking about decision machines.

[–] 1 point (1 child)

And "computable" makes sense because you're talking about decision machines :)

[–] -1 points (0 children)

Yup, but computable could also refer to recognition.

[–] 0 points (0 children)

Depends. There are languages that can pump without being regular. In that case you'll need something else to prove that they aren't regular.

[–] 0 points (3 children)

Can you explain this a little more? I'm thinking of writing a thesis as an undergrad, and I'm looking for ideas.

[–] 1 point (1 child)

I'm sure Wikipedia can explain both regular languages and the arithmetic hierarchy better than I can.
[–] 0 points (0 children)

Actually your response to mrdmnd is what I was interested in, i.e. what specifically you worked on and your techniques. Thanks!

[–] 0 points (0 children)

See if you can find something interesting to work on from here. There are many references to some interesting problems within that book. Also, the classification of sets as "mathematical data types" is pretty amusing.

[–] 29 points (1 child)

OP, you didn't even fully answer your own question: what was the problem that you solved?

[–] 24 points (5 children)

Earlier this semester I was sitting in a Metaphysics G.E., and for some reason the topic was functions from one copy of a 2-disk to itself. The professor then needed to argue that some philosopher's argument was wrong because there was a mapping that had a fixed point. I then pointed out that all the mappings had a fixed point. Thank you, Algebraic Topology.

[–] 49 points (2 children)

I would like to hear more about proving philosophers wrong using fixed points, please.

[–] 10 points (1 child)

We were going over various definitions for when an object is said to be moving. One account suggested breaking the object down into different partitions and then analyzing whether those parts of the object move. The professor brought up a counterexample of a spinning disk whose center is not moving given a certain partition, which I then generalized to what I said above.

[–] 0 points (0 children)

Awesome.
[–] 2 points (1 child)

[–] 1 point (0 children)

[–] 25 points (0 children)

In a physics class of mine, we were learning about applying differential equations. I was struggling to grasp it all. On one of our projects there was a question about a river that contained two trees right on the banks, the line between them perpendicular to the river. A man started at one tree and kayaked across the river (at the same constant rate that the river flowed) to the other side. The caveat was that he constantly paddled in the direction of the tree, so his path wasn't linear. Where did he land on the other bank? After struggling with it for a second, it occurred to me that I could just apply my knowledge of conics, as the tree was the focus of a parabola. That meant that the man would land at the latus rectum, which was a really obscure thing for me to have remembered. Although intended to be a full-page write-up, I managed to knock it out in three lines.

[–] 22 points (0 children)

I was cramming for a cryptography exam on the flight to Seattle for a Microsoft interview, and managed to apply a lattice reduction algorithm to a search query interview question. Got me a job!

[–] 36 points (4 children)

For my first postdoc, I was working on controlling an instrument on a remote spacecraft (SOHO). I had to learn a stripped-down language that the team had written to control uploads to the spacecraft. The first thing they asked me to do was write a codewalker that would estimate the amount of time a given upload would take.
I demonstrated that the language was Turing-equivalent and therefore the problem was impossible in the general case, by the undecidability of the Halting Problem (which is what they had asked me to solve). It turns out, though, that you can solve it in all the cases you care about. Classic engineering dilemma.

[–] 13 points (1 child)

I used to have a job as a compiler-tools writer. We had to solve halting problems every day. E.g. "warning: unreachable code".

[–] 4 points (0 children)

This is a key point that I think the software engineering world is getting better at. When something is impossible in the general case, the solution is to make the tooling good enough that it is easy to deal with in the specific case.

[–] 3 points (1 child)

Every time somebody brings up the Halting Problem in an argument, which happens waaay too often, I ask them for an example that is undecidable. Most of them can barely remember any example. The only example I know about (but can't fully remember) is the one with the contradiction machine, which was incredibly contrived and unrealistic. I still hate my professor for not stressing that the Halting Problem is mostly a theoretical tool.

[–] 2 points (0 children)

The contradiction machine? You mean the language of machines that do not accept when given themselves as input? If that language were decided by a machine M, then either M(M) returns no, in which case M does not accept itself and M should return yes, or M(M) returns yes, in which case M halts and accepts on itself as input, and M(M) should return no. This is exactly what we have to work with: the language of machines which do NOT halt, because the language of machines which DO halt is recognizable.
In fact, this machine is exactly the diagonal argument for the halting problem being undecidable.

[–] 41 points (4 children)

I haven't seen the relevant episode, but the writers for Futurama ended up having to prove a new group theory result to demonstrate that the protagonists' scheme to deal with some weird extradimensional shit would've worked.

[–] 24 points (3 children)

[–] 11 points (1 child)

Ha! I literally just watched this episode over the weekend! The proof is up on the glowboard onscreen for like half a second, but I paused it and was able to follow it fairly well; I had no idea the writers had come up with this theorem. More people need to know about stuff like this. Also, I'm gonna keep this bookmarked for any time anyone talks about how smart Big Bang Theory is.

EDIT: Also, my favorite part is that, in the show, this theorem is postulated not by Farnsworth but by the Harlem Globetrotters.

[–] 3 points (0 children)

It's great because we have genuinely well-written (and tightly-written, I might add) shows like this that are both a lot of fun and provoke a great deal of thought, which then get cancelled to make room for Family Guy, The Cleveland Show, Jersey Shore (actually, virtually any reality show), and so on.

[–] 4 points (0 children)

It would be really good if Futurama found a way to reference the Futamura projections (from compiler theory).

[–] 29 points (7 children)

This is small potatoes, but I liked it. I was playing pub trivia, and our team tied for first.
So the host asked for a representative from each team to come up for a tie breaker. I went up, and he asked each of us to pick a number between 1 and "oh, 13,000", and whoever came closest would win. I immediately recognized this as the cake cutting problem, and so "graciously" let my opponent choose first. They said something like 344, and so I said 345 and claimed the lion's share of the possible numbers. Well, it was good math, but not good psychology. The correct answer was in the 200s, and I lost. :(

[–] 15 points (6 children)

Who in their right mind wouldn't have picked 6500 if they went first?

[–] 18 points (2 children)

Who in their right mind would have the participants communicate the numbers at all?

[–] 13 points (1 child)

At all? ;)

[–] 10 points (0 children)

Erm, well.

[–] 4 points (0 children)

Probably 99%+ of the population.

[–] 2 points (1 child)

The way it was phrased, it wasn't clear that the goal was to pick the number closest to a secret number that the host was thinking of.

[–] 1 point (0 children)

You're right, thanks. I've fixed the wording...

[–] 25 points (0 children)

I once had to do some cooking with my mother, and we had to cut out a third of some spherical vegetable. I used revolution of objects around the x-axis to find out where to cut it so the cross-section at the cut would have the least area, so the vegetable would spoil the least.
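The spherical-vegetable cut above can be checked numerically. A sketch (my own, assuming a planar cut of a unit sphere, and using the standard spherical-cap volume formula V = πh²(3r − h)/3):

```python
from math import pi

def cap_volume(h, r=1.0):
    """Volume of a spherical cap of height h cut from a sphere of radius r."""
    return pi * h * h * (3 * r - h) / 3

def cap_height_for_volume(target, r=1.0, tol=1e-12):
    """Bisection for the cap height: cap_volume is increasing in h on [0, 2r]."""
    lo, hi = 0.0, 2 * r
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cap_volume(mid, r) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

r = 1.0
third = (4 / 3) * pi * r**3 / 3      # one third of the sphere's volume
h = cap_height_for_volume(third, r)  # height of the cap to slice off
# Area of the circular cross-section left exposed by the cut:
cut_area = pi * (2 * r * h - h * h)
```

Because any planar cut removing exactly a third of the volume determines the cap height, the cross-section area is fixed once the cut is planar; the cap height comes out at roughly 0.77 radii.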
[–] 9 points (5 children)

I wound up coming up with a divisibility rule for 10^x - 1, with x in Z, to settle a debate about whether or not the $.99 store we were at near the US-Canada border was duty-free.

[–] 5 points (4 children)

Just to understand it better: isn't 10^x - 1 always divisible by 9?

[–] 3 points (2 children)

Yes; since x is an integer we may rewrite

10^x - 1 = 10^x - 1^x = (10 - 1)(10^(x-1) + 10^(x-2)·1 + ... + 10·1^(x-2) + 1^(x-1)).

Clearly the left term in the product is divisible by 9, so 10^x - 1 is also divisible by 9.

[–] 9 points (1 child)

Yeah... also... 999...999 = 111...111 × 9

[–] 0 points (0 children)

... or 10^x - 1 ≡ 1 - 1 = 0 (mod 9)

[–] 1 point (0 children)

Yup, but the converse is not true. The rule I came up with is similar, but instead of adding individual digits, you group the digits based on how many digits the number you're testing for divisibility by happens to have; so for ninety-nine, you group them in pairs.

Edits for weird cell phone things.

[–][deleted] (3 children)

[deleted]

[–] 1 point (1 child)

Not just from 1996 -- it's been known for about a century that iteration of entire transcendental functions produces interesting sets. I suppose that not much has come of it since, but it would be interesting if the additional information present in such a map could be used to nail things down more. It would seem that Erdos was right, for now -- "Mathematics is not yet ripe for such problems."
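The digit-grouping divisibility rule a few comments up (group the digits in blocks as wide as the divisor, then sum the blocks) is easy to check numerically. A sketch, assuming divisors of the form 10^k − 1, which is the case the rule covers:

```python
def divisible_by_repunit9(n, k):
    """Test divisibility by 10**k - 1 (i.e. 9, 99, 999, ...) by summing the
    k-digit groups of n taken from the right. This works because
    10**k ≡ 1 (mod 10**k - 1), so each group contributes its own value."""
    d = 10**k - 1
    total = 0
    while n > 0:
        total += n % 10**k  # lowest k-digit group
        n //= 10**k
    return total % d == 0

# Grouping in pairs for ninety-nine: 4554 -> 45 + 54 = 99, so 99 | 4554.
```

For k = 1 this reduces to the familiar "sum the digits" rule for 9.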
[–] 9 points (1 child)

Nice try, Numb3rs writers.

[–] 2 points (0 children)

That show ended three years ago, dude.

[–][Algebraic Geometry] 3 points (2 children)

Using group theory to compute all possible rotations of a cube in 3-space.

[–] 1 point (1 child)

Now do it for a Rubik's cube!

[–] 0 points (0 children)

And list all normal subgroups!

[–] 6 points (0 children)

I'm an engineer in R&D, and spend a good fraction of my time doing math modeling. This happens to me pretty often. That said, my math knowledge tops out at DFEQs and vector calc. Usually my slick math trick is recognizing that some integral is some transform I already know and doing it in my head. That, or a change of variables and a slick application of some multidimensional chain rule.

[–] 2 points (0 children)

I used a power diagram to speed up a path-finding algorithm.

[–] 3 points (0 children)

Last semester, I was drinking with some friends and playing BS (the card game). I was supposed to play 7s and I felt like an idiot because I had just lied and played my 7 as a 2. I then realized that I could quickly calculate a plan to minimize the number of lies I tell, as a matter of modular arithmetic. Pretty obvious in hindsight, but planning out your entire strategy in the opening minute feels really good.
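The cube-rotation count mentioned above can be reproduced by brute force: the rotation group of the cube is exactly the set of 3×3 signed permutation matrices with determinant +1 (a sketch, not the group-theoretic derivation the commenter used):

```python
from itertools import permutations, product

def det3(m):
    """Determinant of a 3x3 matrix, expanded along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cube_rotations():
    """All orientation-preserving symmetries of the cube: signed
    permutation matrices (6 permutations x 8 sign patterns = 48 candidates)
    with determinant +1."""
    mats = []
    for perm in permutations(range(3)):
        for signs in product((1, -1), repeat=3):
            m = [[signs[i] if j == perm[i] else 0 for j in range(3)]
                 for i in range(3)]
            if det3(m) == 1:
                mats.append(m)
    return mats
```

Exactly half of the 48 signed permutation matrices have determinant +1, recovering the familiar order 24 of the rotation group.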
[–] My math TA last quarter in Calc 3 used Lagrange multipliers to determine the optimal number of brews for tea leaves, i.e. how much would be left in the leaves after each subsequent brew. Unfortunately I didn't take notes that day or I'd share his methods.

[–] I once saw a discussion of ways to catch a lion. I can only remember two. The first was rather resource-intensive: you build a fence dividing the world in two, identify which half the lion is in, and build a fence separating that half in two. Repeat until you have placed the lion in an area small enough to qualify as caught. Of course, you still need to prove that there exists a lion. The other solution, however, was definitely both the easiest and my favourite: get a cage, stand in the cage, and invert with respect to the cage.

[–] This is FAR less impressive and obscure than many of the other things in this thread, but I'm still a little proud of it. A while back, I was making a Google Doc-based character sheet for New World of Darkness: Changeling that would auto-calculate your XP expenditures based on what you wanted your stats to be. The problem was that the XP rules were very weird. If you had 2 dots in an attribute, the cost to get the third was something like 15 (5 × the number of the new dot). Getting a 4th dot in a skill would be 12 (3 × the number of the new dot). I stared at this in bafflement for a while before realizing that I was looking at a version of triangular numbers. One quick Google search found me the equation for triangular numbers, and some algebra let me figure out how to get from "starting dots" to "total dots" (e.g.
the XP cost to go from 2 dots to 4 dots in a skill), and voila!

[–] You might want to escape your *s; they're causing reddit to italicize your text. Like this: \* Also, I can relate. I have a tendency to randomly remember "Oh, this sequence is important". I usually just use OEIS, though. I just had a not-so-shining moment with that technique earlier today: I came across a sequence that didn't seem very predictable, so I decided to search for it.

[–] Thanks for the tip! That's what I get for not proof-reading after I submit. This is the first I've heard of OEIS. Nice! Thanks for the note, as well!

[–] Some abstract concepts that I've found are generally useful IRL are:
• compactness
• convexity
• computability
• polynomial-time reduction
• isomorphism
• fixed points
• positive/null recurrence

[–] I'm frequently sorting ~35 papers at a time for my class, and I wonder about the algorithm I use ad hoc by using all the spacing between my fingers.
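The triangular-number observation from the XP story above can be made concrete: if each new dot k costs m·k, then the cost of going from a dots to b dots telescopes into a difference of triangular numbers, m·(T(b) − T(a)) with T(n) = n(n+1)/2. A sketch (the function names are mine; the 5× and 3× multipliers follow the examples in the comment):

```python
def tri(n):
    # n-th triangular number: 1 + 2 + ... + n
    return n * (n + 1) // 2

def xp_cost(start_dots, end_dots, multiplier):
    # Each new dot k costs multiplier * k, so the total from
    # start_dots to end_dots telescopes to m * (T(end) - T(start)).
    return multiplier * (tri(end_dots) - tri(start_dots))

assert xp_cost(2, 3, 5) == 15   # 3rd attribute dot at the 5x rate
assert xp_cost(3, 4, 3) == 12   # 4th skill dot at the 3x rate
```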
# Customize options of Taylor polynomials

I am plotting the Taylor series of the sine function using this code:

```latex
\documentclass{article}
\usepackage{pgfplots}
\pgfplotsset{compat=1.16}
\begin{document}
\begin{tikzpicture}
\begin{axis}[domain=-3.14:3.14,samples=100,smooth,no markers,
  axis lines=middle,ymax=2,ymin=-2,xlabel=$x$,ylabel=$y$]
\def\myfun{0}
\pgfplotsforeachungrouped \nn in {0,1,2,3,4}
{\edef\myfun{\myfun+((-1)^(\nn))*pow(x,2*\nn+1)/factorial(2*\nn+1)}
\addplot {\myfun};}
\end{axis}
\end{tikzpicture}
\end{document}
```

I would like to customise the color of each new term inserted in the series (tones of blue), specify the line width, etc. Moreover, I would like to insert a legend entry each time a new partial sum is added. How can I do this, please?

As explained in the section "Defining Cycle Lists based on Color Maps" that starts on p. 220 of the pgfplots manual v1.17, you can create a cycle list based on a colormap. I added one that interpolates between blue and cyan, but you can change this, of course. And you can add the legend entries with \addlegendentryexpanded.

```latex
\documentclass[tikz,border=3mm]{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.16}
\begin{document}
\begin{tikzpicture}
\begin{axis}[width=10cm,domain=-pi:pi,samples=101,smooth,
  no markers,axis lines=middle,
  legend style={at={(0.75,0.4)},anchor=north},
  ymax=2,ymin=-2,xlabel=$x$,ylabel=$y$,
  colormap={blueblack}{color=(blue) color=(cyan)},
  cycle multiindex* list={[samples of colormap=6]\nextlist
   mark list\nextlist}]
\addplot {sin(deg(x))};
\addlegendentry{$\sin x$}
\edef\myfun{0}
\pgfplotsforeachungrouped \nn in {0,1,2,3,4}
{\edef\myfun{\myfun+((-1)^(\nn))*pow(x,2*\nn+1)/factorial(2*\nn+1)}
\addplot {\myfun};
\addlegendentryexpanded{order $\the\numexpr2*\nn+1$}}
\end{axis}
\end{tikzpicture}
\end{document}
```

• @BambOo This, or maybe the sum function from here. – user194703 May 4 at 16:38
• @BambOo I think that the idea of accumulating macros/stuff in this way is rather old. There are very nice answers by Jake who used some pgfplotstable functions to build up such sums. (And I had upvoted your question some time ago, but at that point I did not understand how powerful \pgfmathdeclarefunction really is. These functions can be used everywhere TikZ parses expressions, also in coordinates and so on.) – user194703 May 4 at 16:44
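The partial sums that the pgfplots loop accumulates can also be checked numerically; a minimal sketch of the same five-term series (the function name is mine):

```python
import math

def sin_taylor(x, n_terms):
    # Partial sum of the Maclaurin series for sin, mirroring the
    # pgfplots loop: sum over k of (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

# With 5 terms (orders 1..9) the approximation is already close on [-pi, pi].
assert abs(sin_taylor(1.0, 5) - math.sin(1.0)) < 1e-7
assert abs(sin_taylor(math.pi, 5) - math.sin(math.pi)) < 0.01
```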
# Understanding op-amp gain bandwidth product

Below are the schematic and AC transfer function for a transimpedance amplifier from this design. According to the datasheet, the op amp has a gain bandwidth product of 20 MHz. But here the gain is 94.58 dB = 54954 V/V and the bandwidth is 1.464 MHz. So Gain × BW = 54954 × 1.464 MHz = 80452 MHz, which is a lot larger than the value specified in the datasheet. Can anyone help me understand how such a high GBP is possible here?

The transimpedance amplifier (TIA) has a gain (volts out to current in) equal to the feedback resistor. It is 53.6 kohm; take the log of this and multiply by 20 and you get 94.58 dB. Gain bandwidth product is all about voltage gain - you don't have any voltage gain in a theoretical TIA circuit unless you are going to perform noise analysis. If you were to analyse noise you'd realize that without the feedback capacitor (2p7) the amplifier input's "self noise" is significantly amplified at higher frequencies due to the parasitic capacitance of the photodiode. It basically forms a gain stage from the equivalent input noise in series with the non-inverting input. The 2p7 seeks to reduce this effect by progressively shunting the 53k6 as frequencies rise.

• The gain in the PDF is in V/A indeed. That's not quite "dB" as we have it in the OP's graph. – Fizz Oct 30 '15 at 17:39
• @RespawnedFluff 20log$_{10}$(53600) = 94.58. Not sure what you mean? Oct 30 '15 at 17:41
• "dB" usually refers to unit-less ratios; they note it as "Gain (V/A, dB)". Also, the PDF does get to op-amp GBW calculations on page 5. That mostly depends on the capacitors added (as I suspected). – Fizz Oct 30 '15 at 17:44
• @RespawnedFluff I'm trying to figure out if you are saying my answer is somehow incorrect. Maybe you think I should add something? Oct 30 '15 at 17:47

You'll have to work through their calculations, but it's mainly the caps that determine the required GBW of the opamp, in order to ensure stability.
Intuitively, the caps short out the higher frequencies; in particular, C1 shorts out R1, so the op-amp feedback/gain diminishes at higher frequencies. They came up with 9.653 MHz of GBW needed for the op amp, which the OPA320 satisfies. You have confused that with the gain of the TIA. See pages 5-8 in the appnote for their calculation of the op-amp GBW (and op-amp selection).
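The arithmetic in the answer is easy to verify with the 53.6 kΩ / 2.7 pF values from the question. A rough sketch: the ~1.1 MHz figure below is only the first-order corner of the feedback network itself, not the exact −3 dB point, which also depends on the op amp and photodiode capacitance.

```python
import math

R_f = 53.6e3    # feedback resistor from the schematic, ohms
C_f = 2.7e-12   # feedback capacitor ("2p7"), farads

# The transimpedance "gain" quoted in dB is 20*log10 of the V/A ratio
# set by the feedback resistor.
gain_db = 20 * math.log10(R_f)
assert abs(gain_db - 94.58) < 0.01

# First-order corner frequency of the R_f || C_f feedback network,
# which is where the feedback impedance starts to roll off.
f_pole = 1 / (2 * math.pi * R_f * C_f)   # roughly 1.1 MHz
assert abs(f_pole - 1.1e6) < 0.01e6
```

That the pole of the feedback network lands near the ~1.46 MHz bandwidth read off the plot is consistent with the answer's point: the caps, not the op-amp GBW, dominate the rolloff.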
# Does a photon experience time or space?

1. Jul 20, 2009 ### dshea

2. Jul 20, 2009 ### nnnm4 That's kind of an unanswerable question. How can something without complexity (in a certain sense) "experience" anything? A brain is required.

3. Jul 20, 2009 ### negitron Well, in this case "experience" is simply a colloquialism (and a common one in popular science journalism) meaning "affected by". That being the case, it's pretty well agreed by mainstream physicists that photons do not, in fact, "experience" time; that is to say, they do not age.

4. Jul 20, 2009 ### dshea Your answer seems philosophical, and if you want to go that way, does a pencil not experience time? How about bacteria, a bug, a fish, a cow, ..., or even a human being? I'm sort of interested in the philosophical side of this discussion; however, I was wondering based on current scientific theories (insert above question).

5. Jul 20, 2009 ### dshea So can a photon have spin?

6. Jul 20, 2009 ### negitron According to the Standard Model, photons are particles with spin 1. So, yes.

7. Jul 20, 2009 ### Math Is Hard Staff Emeritus A pencil is affected by time, but does not experience it. Things with sensory systems and brain structures to process incoming sensory data may have the experience of events occurring in sequence and may be able to make use of it. Most humans have a time experience; bacteria, eh... probably not. For critters in between it gets more interesting and a bit more testable.

8. Jul 20, 2009 ### dshea Thank you so far for your answers, I am loving this forum. Is the spin part of the photon's energy? How much energy is in a photon?

9. Jul 20, 2009 ### negitron Do I really need to link to a definition of colloquialism?

10. Jul 20, 2009 ### negitron No, it's an intrinsic property of all photons. A photon's energy is given by E = hf, where E is the energy, h is the Planck constant and f is the frequency.

11.
Jul 20, 2009 ### maverick_starstrider This question seems a lot more complex to me. Applying Robertson's inequality (essentially Heisenberg's uncertainty principle), $\Delta E \, \Delta t \geq \frac{\hbar}{2}$, which implies a potential energy fluctuation in time.

12. Jul 20, 2009 ### dshea If a photon leaves its source and is not affected by time and space, then at the point I absorb the photon, does it follow that the source has interacted with me? To further develop my question so that you understand what I am thinking: the source is in a way passing its energy directly to me. In a way I am interacting with something that existed possibly millions of years ago.

13. Jul 21, 2009 ### maverick_starstrider I wish I could provide you with an intuitive, easy answer, but unfortunately quantum mechanics doesn't work that way. I'd imagine you're more or less visualizing a photon being ejected from a scattering event like someone flicking a pea with their finger or some such. However, the photon is a spin-1 boson with a quantum wavefunction propagating through space. Depending on what you're trying to do to the photon or how you're trying to measure it, a "timeless particle" perspective vs. a "continuously fluctuating entity popping in and out of existence along a single trajectory" perspective is impossible to distinguish. It propagates like a spin-1 boson. The application of terms like "timelessness" and "experience" is really dubious. In other words, we can exactly model its behaviour, but translating that into such vague English terms is always going to be problematic, ESPECIALLY for a question like "how OLD is it".

14. Jul 21, 2009 ### apeiron Google Cramer and the transactional interpretation of QM if you want some sort of thinking along these lines.

15. Jul 21, 2009 ### sas3 As I understand it, since a photon has no rest mass it must travel at the speed of light, and anything traveling at the speed of light does not move through time (i.e. time stops).
So for a photon space/time does not exist. From a photon's point of view it never gets to go anywhere; or does it get to go everywhere in no time? Is it a particle, is it a wave, or both at the same time? Damn, that's right, time doesn't exist. OUCH!

16. Jul 21, 2009 ### dshea maverick, I wish you could provide me with an intuitive, easy answer. On the other hand, I have avoided understanding things based on their difficulty for far too long, and am rolling up my sleeves and getting to work. Gotta start somewhere, right! Now, what do you mean by a spin-1 boson (I know what it means to propagate)? Also, because of length contraction, if light is a particle would it be two-dimensional (from our perspective)? Is it possible for us to view/measure a single photon? P.S. I will be asking a lot of questions, as I have so many of them.

17. Jul 21, 2009 ### ibcnunabit If you interpret the question as whether change of position or time elapses from the photon's perspective, no, it doesn't. This has interesting consequences, and I believe it is one of the prime reasons for many of the peculiarities of light as seen from our perspective, in which distance and time DO elapse in our observations of light. (Double slit experiment, etc.)

18. Jul 21, 2009 ### maverick_starstrider The double slit experiment is the result of wave-particle duality. You can perform the double slit experiment with electrons onto phosphorous paper and get the same result. There's nothing special about light.

19. Jul 21, 2009 ### maverick_starstrider A boson is a particle which:
- has an integer spin
- has a symmetric spatial component of the total wavefunction in a two- or multi-boson system
- does not follow the Pauli exclusion principle (which applies to fermions)
These three things are actually all the exact same property stated in different ways. All the messenger particles are bosons (photons, gauge bosons, etc.), as is, for example, a helium atom. I do not understand what you mean about light being 2-dimensional.
All particles, including photons, have a quantum wavefunction, which is essentially a probability amplitude that inherently obeys Heisenberg's uncertainty principle (which is actually a specific case of Robertson's inequality). Therefore, the notion of the EXTENT of a particle is inherently fuzzy, just as much as its position is. And yes, you can detect an individual (real) photon. This was the essence of the double slit experiment: one observes an interference pattern even though only a single photon is going through the apparatus at a given time. http://en.wikipedia.org/wiki/Double_slit_experiment

20. Jul 21, 2009 ### dshea I must say, maverick, your answers are clear, and you have been very helpful to my understanding thus far. As for light being 2-dimensional, I may have been jumping to a false conclusion. My conclusion came from an observer's point of view: if there were a spaceship traveling from left to right, as it approached the speed of light I would notice its contraction the closer it came to c. My conclusion was that since light travels at c, as an observer I would see it contracted into a 2-dimensional something. I think of black holes in the same way: all the particles are traveling in all directions towards the center at increasing speed, reaching c, giving a black hole its singularity structure from the point of view of an observer. If I'm way off track, can you set me right?
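The E = hf formula from earlier in the thread is easy to evaluate for a concrete photon; a short sketch (the 550 nm wavelength is just an example value, and the function name is mine):

```python
h = 6.62607015e-34      # Planck constant, J*s (exact in the 2019 SI)
c = 299792458.0         # speed of light, m/s (exact)
e = 1.602176634e-19     # elementary charge, J per eV (exact)

def photon_energy_ev(wavelength_m):
    # E = h f = h c / lambda, converted from joules to electron-volts
    return h * c / wavelength_m / e

# A 550 nm (green) photon carries about 2.25 eV.
assert abs(photon_energy_ev(550e-9) - 2.254) < 0.01
```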
## Direction of Satcom R&D in Japan: WINDS, ETS-IX, and beyond

Advances in Communications Satellite Systems: Proceedings of the 37th International Communications Satellite Systems Conference (ICSSC-2019)

Advanced satellite communications technology has been achieved through the Wideband InterNetworking engineering test and Demonstration Satellite (WINDS) and Engineering Test Satellite 9 (ETS-IX) projects over the last two decades in Japan. WINDS achieved Gbps-class transmission capability, a wide-bandwidth active phased array antenna in Ka-band, and high-speed onboard switching technologies. ETS-IX is currently being developed to demonstrate flexible resource management capability with a wide-bandwidth digital channelizer and digital beamformer in Ka-band. In this chapter, very high capacity optical communications technology being developed for feeder-link applications of a very high throughput satellite is reviewed. Satellite communications are thought to be beneficial for 5G coverage expansion to the ocean surface, air and space, and some trials to integrate them into 5G are planned. ETS-IX will be used as an experimental platform for such technical trials based on European Space Agency-National Institute of Information and Communications Technology (ESA-NICT) collaborations.
Based on these achievements and experiences, and taking into account the recent trend of satellite communications technology for VHTS, LEO constellations, HAPS, and so on, we have studied next-generation satellite communications technology.

Chapter Contents:
• 31.1 Introduction
• 31.2 WINDS
• 31.3 ETS-IX
• 31.4 Direction of Satcom R&D
• 31.4.1 Satellite communications in beyond-5G networks
• 31.4.2 Fundamental technology development for satellite networks in the future
• 31.4.2.1 Digital transponders
• 31.4.2.2 Optical space communications
• 31.5 Conclusion
• References
## Animated Logical Graphs • 18

Last time we contemplated the penultimately simple algebraic expression ${}^{\backprime\backprime} \texttt{(} a \texttt{)} {}^{\prime\prime}$ as a name for a set of arithmetic expressions, namely, $\texttt{(} a \texttt{)} = \{ \,\texttt{()}\, , \,\texttt{(())}\, \},$ taking the equality sign in the appropriate sense.

Then we asked the corresponding question about the operator ${}^{\backprime\backprime} \texttt{(} ~ \texttt{)} {}^{\prime\prime}.$ The above selection of arithmetic expressions is what it means to contemplate the absence or presence of the arithmetic constant ${}^{\backprime\backprime} \texttt{(} ~ \texttt{)} {}^{\prime\prime}$ in the place of the operand ${}^{\backprime\backprime} a {}^{\prime\prime}$ in the algebraic expression ${}^{\backprime\backprime} \texttt{(} a \texttt{)} {}^{\prime\prime}.$ But what would it mean to contemplate the absence or presence of the operator ${}^{\backprime\backprime} \texttt{(} ~ \texttt{)} {}^{\prime\prime}$ in the algebraic expression ${}^{\backprime\backprime} \texttt{(} a \texttt{)} {}^{\prime\prime}?$

Evidently, a variation between the absence and the presence of the operator ${}^{\backprime\backprime} \texttt{(} ~ \texttt{)} {}^{\prime\prime}$ in the algebraic expression ${}^{\backprime\backprime} \texttt{(} a \texttt{)} {}^{\prime\prime}$ refers to a variation between the algebraic expression ${}^{\backprime\backprime} a {}^{\prime\prime}$ and the algebraic expression ${}^{\backprime\backprime} \texttt{(} a \texttt{)} {}^{\prime\prime},$ somewhat as pictured below. But how shall we signify such variations in a coherent calculus?
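Under one common reading of the primary arithmetic behind these graphs, the empty expression is the unmarked state and enclosure is a crossing, so the two constants $\texttt{()}$ and $\texttt{(())}$ can be evaluated mechanically. A sketch (the recursive evaluator and its name are my own, and variables are not handled):

```python
def marked(s):
    # s is a balanced string of parentheses (a primary-arithmetic constant).
    # A juxtaposition is marked iff any of its top-level crosses is marked,
    # and a cross (...) is marked iff its contents are UNmarked.
    depth, start, parts = 0, 0, []
    for i, ch in enumerate(s):
        if ch == '(':
            if depth == 0:
                start = i
            depth += 1
        else:
            depth -= 1
            if depth == 0:
                parts.append(s[start + 1:i])
    return any(not marked(p) for p in parts)

assert marked("()") is True        # the mark
assert marked("(())") is False     # the mark crossed: unmarked
assert marked("()()") is True      # condensation: ()() = ()
```

This makes the set $\{ \texttt{()}, \texttt{(())} \}$ from the post concrete: it is exactly one marked and one unmarked value.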
Question

# Assertion: While separating a mixture of ortho- and para-nitrophenol by steam distillation, the ortho isomer is the one that is steam volatile. Reason: o-nitrophenol is steam volatile due to chelation (intramolecular H-bonding) and hence can be separated by steam distillation from p-nitrophenol, which is not steam volatile because of intermolecular H-bonding.

A. Both Assertion and Reason are correct and Reason is the correct explanation for Assertion
B. Both Assertion and Reason are correct but Reason is not the correct explanation for Assertion
C. Assertion is correct but Reason is incorrect
D. Both Assertion and Reason are incorrect

Solution

The correct option is A: Both Assertion and Reason are correct and Reason is the correct explanation for Assertion. Intramolecular H-bonding is present in o-nitrophenol. In p-nitrophenol, the molecules are strongly associated due to the presence of intermolecular H-bonding. Hence, o-nitrophenol is steam volatile.
# The VLA-COSMOS 3 GHz Large Project: Cosmic evolution of radio AGN and implications for radio-mode feedback since z~5 [GA]

Based on a sample of over 1,800 radio AGN at redshifts out to z~5, which have typical stellar masses within ~3x(10^{10}-10^{11}) Msol, and 3 GHz radio data in the COSMOS field, we derived the 1.4 GHz radio luminosity functions for radio AGN (L_1.4GHz ~ 10^{22}-10^{27} W/Hz) out to z~5. We constrained the evolution of this population via continuous models of pure density and pure luminosity evolution, and we found best-fit parametrizations of Phi~(1+z)^{(2.00+/-0.18)-(0.60+/-0.14)z} and L~(1+z)^{(2.88+/-0.82)-(0.84+/-0.34)z}, respectively, with a turnover in the number and luminosity densities of the population at z~1.5. We converted 1.4 GHz luminosity to kinetic luminosity, taking uncertainties of the scaling relation used into account. We thereby derived the cosmic evolution of the kinetic luminosity density provided by the AGN and compared this luminosity density to the radio-mode AGN feedback assumed in the Semi-Analytic Galaxy Evolution (SAGE) model, i.e., to the redshift evolution of the central supermassive black hole accretion luminosity taken in the model as the source of heating that offsets the energy losses of the cooling, hot halo gas, and thereby limits further stellar mass growth of massive galaxies. We find that the kinetic luminosity exerted by our radio AGN may be high enough to balance the radiative cooling of the hot gas at each cosmic epoch since z~5. However, although our findings support the idea of radio-mode AGN feedback as a cosmologically relevant process in massive galaxy formation, many simplifications in both the observational and semi-analytic approaches still remain and need to be resolved before robust conclusions can be reached.

V. Smolcic, M. Novak, I. Delvecchio, et al.
Mon, 22 May 17

Comments: 13 pages, 9 figures, 2 tables, to appear in A&A

# Gemini NIFS survey of feeding and feedback processes in nearby Active Galaxies: I – Stellar kinematics [GA]

We use the Gemini Near-Infrared Integral Field Spectrograph (NIFS) to map the stellar kinematics of the inner few hundred parsecs of a sample of 16 nearby Seyfert galaxies, at a spatial resolution of tens of parsecs and a spectral resolution of 40 km/s. We find that the line-of-sight (LOS) velocity fields for most galaxies are well reproduced by rotating-disk models. The kinematic position angle (PA) derived for the LOS velocity field is consistent with the large-scale photometric PA. The residual velocities are correlated with the hard X-ray luminosity, suggesting that more luminous AGN have a larger impact on the surrounding stellar dynamics. The central velocity dispersion values are usually higher than the rotation velocity amplitude, which we attribute to the strong contribution of bulge kinematics in these inner regions. For 50% of the galaxies, we find an inverse correlation between the velocities and the $h_3$ Gauss-Hermite moment, implying red wings on the blueshifted side and blue wings on the redshifted side of the velocity field, attributed to the bulge stars lagging the rotation. Two of the 16 galaxies (NGC 5899 and Mrk 1066) show an S-shaped zero-velocity line, attributed to the gravitational potential of a nuclear bar. Velocity dispersion maps show rings of low-$\sigma$ values (50-80 km/s) for 4 objects and "patches" of low $\sigma$ for 6 galaxies at 150-250 pc from the nucleus, attributed to young/intermediate-age stellar populations.

R. Riffel, T. Storchi-Bergmann, R. Riffel, et al.

Mon, 22 May 17

Comments: To be published in MNRAS

# On the existence of young embedded clusters at high Galactic latitude [GA]

Careful analyses of photometric and star count data available for the nine putative young clusters identified by Camargo et al.
(2015, 2016) at high Galactic latitudes reveal that none of the groups contain early-type stars, and most are not significant density enhancements above field level. 2MASS colours for stars in the groups match those of unreddened late-type dwarfs and giants, as expected for contamination by (mostly) thin-disk objects. A simulation of one such field using only typical high-latitude foreground stars yields a colour-magnitude diagram that is very similar to those constructed by Camargo et al. (2015, 2016) as evidence for their young groups, as well as the means of deriving their reddenings and distances. Although some of the fields are coincident with clusters of galaxies, one must conclude that there is no evidence that the putative clusters are extremely young stellar groups.

D. Turner, G. Carraro and E. Panko

Mon, 22 May 17

# VLA Survey of Dense Gas in Extended Green Objects: Prevalence of 25 GHz Methanol Masers [GA]

We present ~1-4 arcsec resolution Very Large Array (VLA) observations of four CH$_3$OH $J_2-J_1$-$E$ 25 GHz transitions ($J$=3, 5, 8, 10), along with 1.3 cm continuum, toward 20 regions of active massive star formation containing Extended Green Objects (EGOs), 14 of which we have previously studied with the VLA in the Class I 44 GHz and Class II 6.7 GHz maser lines (Cyganowski et al. 2009). Sixteen regions are detected in at least one 25 GHz line ($J$=5), with 13 of 16 exhibiting maser emission. In total, we report 34 new sites of CH$_3$OH maser emission and ten new sites of thermal CH$_3$OH emission, significantly increasing the number of 25 GHz Class I CH$_3$OH masers observed at high angular resolution. We identify probable or likely maser counterparts at 44 GHz for all 15 of the 25 GHz masers for which we have complementary data, providing further evidence that these masers trace similar physical conditions despite uncorrelated flux densities.
The sites of thermal and maser emission of CH$_3$OH are both predominantly associated with the 4.5 $\mu$m emission from the EGO, and the presence of thermal CH$_3$OH emission is accompanied by 1.3 cm continuum emission in 9 out of 10 cases. Of the 19 regions that exhibit 1.3 cm continuum emission, it is associated with the EGO in 16 cases (out of a total of 20 sites), 13 of which are new detections at 1.3 cm. Twelve of the 1.3 cm continuum sources are associated with 6.7 GHz maser emission and likely trace deeply embedded massive protostars.

A. Towner, C. Brogan, T. Hunter, et al.

Mon, 22 May 17

# Intrinsic AGN SED & black hole growth in the Palomar-Green quasars [GA]

We present a new analysis of the PG quasar sample based on Spitzer and Herschel observations. (I) Assuming PAH-based star formation luminosities (L_SF) similar to Symeonidis et al. (2016, S16), we find mean and median intrinsic AGN spectral energy distributions (SEDs). These, in the FIR, appear hotter and significantly less luminous than the S16 mean intrinsic AGN SED. The differences are mostly due to our normalization of the individual SEDs, which properly accounts for a small number of very FIR-luminous quasars. Our median, PAH-based SED represents a ~6% increase on the 1-250 $\mu$m luminosity of the extended Mor & Netzer (2012, EM12) torus SED, cf. the ~20% found by S16. It requires large-scale dust with T ~ 20-30 K which, if optically thin and heated by the AGN, would be outside the host galaxy. (II) We also explore the black hole and stellar mass growth, using L_SF estimates from fitting Herschel/PACS observations after subtracting the EM12 torus contribution. We use rough estimates of stellar mass, based on scaling relations, to divide our sample into groups: on, below, and above the star formation main sequence (SFMS). Objects on the SFMS show a strong correlation between star formation luminosity and AGN bolometric luminosity, with a logarithmic slope of ~0.7.
Finally, we derive the relative duty cycles of this and another sample of very luminous AGN at z = 2-3.5. Large differences in this quantity indicate different evolutionary pathways for these two populations, characterised by significantly different black hole masses.

C. Lani, H. Netzer and D. Lutz

Mon, 22 May 17

# The formation and coalescence sites of the first gravitational wave events [GA]

We present a novel theoretical model to characterize the formation and coalescence sites of compact binaries in a cosmological context. This is based on the coupling of the binary population synthesis code SeBa with a simulation following the formation of a Milky Way-like halo in a well-resolved cosmic volume of 4 cMpc, performed with the GAMESH pipeline. We have applied this technique to investigate when and where systems with properties similar to the recently observed LIGO/VIRGO events are more likely to form, and where they are more likely to reside when they coalesce. We find that more than 70% of GW151226 and LVT151012 candidates form in galaxies with stellar mass M_{star} > 10^8 Msun in the redshift ranges [0.06 - 3] and [0.14 - 11.3], respectively. All GW150914 candidates form in low-metallicity dwarfs with M_{star} < 5 \times 10^6 Msun at 2.4 < z < 4.2. Despite these initial differences, by the time they reach coalescence the observed events are most likely hosted by star-forming galaxies with M_{star} > 10^{10} Msun. Due to tidal stripping and radiative feedback, a non-negligible fraction of GW150914 candidates end up in galaxies with properties similar to dwarf spheroidals and ultra-faint satellites.

R. Schneider, L. Graziani, S. Marassi, et al.
Mon, 22 May 17

We present the results of very long baseline interferometry (VLBI) observations of the gamma-ray bright blazar S5 0716+714 using the Korean VLBI Network (KVN) at the 22, 43, 86, and 129 GHz bands, as part of the Interferometric Monitoring of Gamma-ray Bright AGNs (iMOGABA) KVN key science program. Observations were conducted in 29 sessions from January 16, 2013 to March 1, 2016, with the source being detected and imaged at all available frequencies. In all epochs, the source was compact on the milliarcsecond (mas) scale, yielding a compact VLBI core dominating the synchrotron emission on these scales. Based on the multi-wavelength data between 15 GHz (Owens Valley Radio Observatory) and 230 GHz (Submillimeter Array), we found that the source shows multiple prominent enhancements of the flux density at centimeter (cm) and millimeter (mm) wavelengths, with mm enhancements leading cm enhancements by -16$\pm$8 days. The turnover frequency was found to vary between 21 and 69 GHz during our observations. By assuming a synchrotron self-absorption model for the relativistic jet emission in S5 0716+714, we found the magnetic field strength in the mas emission region to be $\le$5 mG during the observing period, yielding a weighted mean of 1.0$\pm$0.6 mG for higher turnover frequencies (e.g., >45 GHz).
# Embedding a torus [closed]

Could anyone please help me? Why is it impossible to embed a torus in R^3 with index 1 (the usual Euclidean space with an index-1 metric, viewed as a semi-Riemannian manifold) as a semi-Riemannian submanifold? Thanks.

Edit: I didn't understand, so let me ask another question: why does R^3 with this metric not have any compact semi-Riemannian submanifold?

(Closed as off-topic by Ricardo Andrade, Andrey Rekalo, Daniel Moskovich, Andrés Caicedo, BS, Sep 22 '13: "This question does not appear to be about research level mathematics within the scope defined in the help center.")

## 1 Answer

I suppose that by "usual" you mean $\mathbb{R}^3$ with the semi-Riemannian metric $dx^2 + dy^2 - dz^2$? If so, for any compact surface $S$ embedded in $\mathbb{R}^3$: since the metric is invariant under translation of $\mathbb{R}^3$, you are allowed to translate $S$ so that it is tangent to the light cone $x^2 + y^2 - z^2 = 0$. At any point of tangency the tangent plane of $S$ contains a null direction of the cone, so the restricted metric is not positive definite there. (For the Lorentzian case: at the point of $S$ where $z$ attains its maximum, the tangent plane is the spacelike plane $dz = 0$, so the induced metric cannot have index 1 there either.) To do this translation, first translate $S$ so that it is strictly inside the light cone, then let $S$ gently descend until the first moment it touches the light cone.
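The key fact in the answer, that the metric dx^2 + dy^2 - dz^2 restricted to the light cone is not definite, can be checked numerically: pulling the metric back along the standard parametrization (u cos v, u sin v, u) of the upper cone leaves only the degenerate form u^2 dv^2. A sketch with an ad-hoc finite-difference helper (all names are my own):

```python
import math

def r(u, v):
    # parametrization of the upper light cone x^2 + y^2 - z^2 = 0
    return (u * math.cos(v), u * math.sin(v), u)

def minkowski(a, b):
    # the index-1 bilinear form dx^2 + dy^2 - dz^2
    return a[0] * b[0] + a[1] * b[1] - a[2] * b[2]

def partial(f, i, u, v, h=1e-6):
    # central finite difference of f in its i-th argument
    args = [u, v]
    args[i] += h
    hi = f(*args)
    args[i] -= 2 * h
    lo = f(*args)
    return tuple((a - b) / (2 * h) for a, b in zip(hi, lo))

u, v = 2.0, 0.7
ru = partial(r, 0, u, v)
rv = partial(r, 1, u, v)
E, F, G = minkowski(ru, ru), minkowski(ru, rv), minkowski(rv, rv)

# Induced first fundamental form: E du^2 + 2F du dv + G dv^2 = u^2 dv^2,
# a rank-1 (degenerate) form, so the cone is nowhere a Riemannian surface.
assert abs(E) < 1e-6 and abs(F) < 1e-6 and abs(G - u * u) < 1e-4
```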
Speed, Time and Distance

1. A train met with an accident 3 hours after starting, which detained it for one hour, after which it proceeded at 75% of its original speed. It arrived at the destination 4 hours late. Had the accident taken place 150 km further along the railway line, the train would have arrived only 3½ hours late. Find the length of the trip and the original speed of the train.

A. 1100 km; 100 kmph
B. 1200 km; 100 kmph
C. 1200 km; 90 kmph
D. 1600 km; 90 kmph

Correct Option: B

Let A be the starting point and B the terminus, and let C and D be the points where the accidents take place (D is 150 km beyond C).

By travelling at 3/4 of its original speed, the train takes 4/3 of its usual time, i.e. 1/3 more than the usual time.

∴ 1/3 of the usual time taken to travel the distance CB = 4 − 1 = 3 hrs. ...(i)

and 1/3 of the usual time taken to travel the distance DB = 3½ − 1 = 2½ hrs. ...(ii)

Subtracting equation (ii) from (i), 1/3 of the usual time taken to travel the distance CD = 3 − 2½ = ½ hr.

∴ Usual time taken to travel CD (150 km) = 3 × ½ = 3/2 hr, so the usual speed of the train = 150 ÷ (3/2) = 100 kmph.

From (i), the usual time taken to travel CB = 3 × 3 = 9 hrs, so the total time = 3 + 9 = 12 hrs.

∴ Length of the trip = 12 × 100 = 1200 km.
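The arithmetic above can be double-checked with a short script (a sketch; the variable names are mine, not the source's):

```python
# Verify option B: original speed v = 100 kmph, trip length d = 1200 km.
# Running at 3/4 speed multiplies travel time by 4/3, i.e. adds 1/3 extra.
v, d = 100.0, 1200.0
t_total = d / v                      # usual total time, in hours

def delay(accident_km):
    """Total lateness if the accident happens `accident_km` from the start:
    1 hour of detention plus 1/3 of the usual time for the remaining stretch."""
    remaining = d - accident_km
    return 1 + (remaining / v) / 3

print(t_total, delay(3 * v), delay(3 * v + 150))  # 12.0 4.0 3.5
```

The delays of 4 and 3.5 hours match the two scenarios in the problem, confirming the answer.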
Name ________________________ Math 172 Spring 2019

Written Assignment 10: Vectors, Dot Product, Cross Product

1. Define the points ?(−3, −1), ?(−1, 2), ?(1, 2), ?(3, 5), ?(4, 2) and ?(6, 4).
   (a) Sketch the three vectors joining consecutive pairs of these points and the corresponding position vectors.
   (b) Find the equal vectors among them.

2. Given u = ⟨4, −3, 0⟩ and v = ⟨1, 2, 3⟩,
   (a) Find 3u + 2v.
   (b) Find |3u + 2v|.
   (c) Find the unit vector in the direction of 3u + 2v.

3. Compute the dot product of the vectors u and v, and find the angle between the vectors: u = −3? + 4?, v = −4? + ? + 5?.

4. Find the scalar component of u in the direction of v, scal_v u, and the orthogonal projection of u onto v, proj_v u, for the vectors u = −3? + 4?, v = −4? + ? + 5?.

5. Compute the cross product ⟨4, 1, −2⟩ × ⟨1, 1, 0⟩.

6. Find the area of the triangle whose vertices are the points ?(1, 0, 3), ?(5, 0, −1) and ?(0, 2, −2).
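For the fully specified problems 5 and 6, the answers can be checked with a few lines of plain vector arithmetic (my sketch, no libraries beyond the standard math module):

```python
import math

def cross(u, v):
    """Cross product of two 3-vectors given as tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def norm(u):
    return math.sqrt(sum(c * c for c in u))

# Problem 5
print(cross((4, 1, -2), (1, 1, 0)))          # (2, -2, 3)

# Problem 6: area = |PQ x PR| / 2 for P(1,0,3), Q(5,0,-1), R(0,2,-2)
P, Q, R = (1, 0, 3), (5, 0, -1), (0, 2, -2)
PQ = tuple(q - p for p, q in zip(P, Q))
PR = tuple(r - p for p, r in zip(P, R))
print(norm(cross(PQ, PR)) / 2)               # 4*sqrt(11), about 13.2665
```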
# Numerically calculating the integral of the discrepancy between the empirical and theoretical distribution functions

Having a sample of observations $x_i$, $i=1,\dots,n$, to test the null hypothesis that the data follow a specific distribution $F_0(x)$, the Cramér–von Mises goodness-of-fit test statistic is
$$\omega^2_n = n\int_{-\infty}^{+\infty} \left( \hat{F}_n(x) - F_0 (x)\right)^2 \,\mathrm{d}F_0(x),$$
where $\hat{F}_n(x)$ is the empirical distribution function. If the test distribution $F_0$ has unknown parameter(s), the statistic $\omega_n^{2}$ is estimated as (e.g., Evans 2008)
$$\hat{\omega}_n^{2} = \frac{1}{12n} + \sum_{i=1}^{n}\left[\frac{2i-1}{2n} - \hat{F}_0(x_{i:n})\right]^{2},$$
where $x_{1:n}, x_{2:n}, \dots, x_{n:n}$ are the sample order statistics.

On the other hand, some literature uses the statistic (Cramér 1928), known as the $L_2$ distance,
$$L_2 = \int_{-\infty}^{+\infty} \left( \hat{F}_n(x) - \hat{F}_0(x)\right)^2 \,\mathrm{d}x,$$
as a goodness-of-fit criterion for the above null hypothesis; it measures the discrepancy between the empirical and theoretical distribution functions. This distance has the disadvantage that it is not distribution-free; thus, if we want to apply it for goodness-of-fit testing, the critical values depend on $F_0$.

My question now is how to calculate (using a program in R) this test statistic, and — a more difficult question — how to calculate its corresponding p-value. To be more specific, consider the following simulated sample:

> set.seed(12345)
> x <- rexp(30, rate = 2)
> hatLambda <- 1/mean(x)
> hatLambda
[1] 1.692009
# so we fit an Exp. distribution with this rate to the observed sample in x

Now I want to calculate, numerically, the $L_2$ distance between the fitted exponential distribution and the empirical distribution.
As asked before, how do I calculate the corresponding p-value (second priority)?

Two more points:

1) I am already aware that calculating $\hat{\omega}_n^{2}$ is simple; the reason for attempting the more difficult task of calculating the $L_2$ distance is to compare my results with some already available results.

2) Once I hopefully get the answer, I will calculate the $L_2$ distance for testing available data against more complicated distributions like the Pareto, lognormal, etc.

Many thanks in advance.
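One way to compute the integral is to note that between consecutive order statistics the empirical CDF is constant, so the integral splits into segments that can each be handled by ordinary quadrature. Here is a sketch in Python (the same piecewise idea ports directly to R's `integrate`); the tail cutoff and the sample values are my assumptions, appropriate for light-tailed fits like the exponential. For the p-value, the usual approach for a non-distribution-free statistic is a parametric bootstrap: refit and recompute the statistic on many samples drawn from the fitted $F_0$.

```python
import math

def l2_distance(sample, cdf, lower=0.0, upper=None, steps=2000):
    """L2 distance between the empirical CDF of `sample` and a model CDF.
    The ECDF is constant between order statistics, so we integrate
    segment by segment with a midpoint rule; `upper` truncates the
    infinite tail (fine when (1 - F0)^2 is negligible beyond it)."""
    xs = sorted(sample)
    n = len(xs)
    if upper is None:
        upper = xs[-1] + 20.0   # crude cutoff, my assumption
    knots = [lower] + xs + [upper]
    total = 0.0
    for i in range(n + 1):
        a, b = knots[i], knots[i + 1]
        if b <= a:
            continue
        level = i / n           # ECDF value on (a, b)
        h = (b - a) / steps
        total += h * sum((level - cdf(a + (k + 0.5) * h)) ** 2
                         for k in range(steps))
    return total

# Example mirroring the R snippet: fit an exponential by 1/mean and
# measure the L2 distance to the ECDF (sample values are illustrative).
x = [0.12, 0.31, 0.48, 0.95, 1.40]
rate = 1.0 / (sum(x) / len(x))
print(l2_distance(x, lambda t: 1.0 - math.exp(-rate * t)))
```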
# Interpolate discrete points to create a 3D surface and highlight points

I have some discrete points in 3D: https://www.dropbox.com/s/syl7w3s30tc30b9/amycoo.xlsx?dl=0

I have the following problem:

1. I want to create a 3D surface from those points and need to somehow interpolate the "outer" points.
2. I want to make this shape/surface a bit transparent and highlight some points on or in the interior of it, e.g. an arrow pointing at that point with a text label showing a value.

To make this more clear: let's say the points describe a brain area and I want to create a continuous version of the shape of this area, but also highlight specific points which represent the result of some data analysis, i.e. select the first point, which has a value of 10.

ListPointPlot3D was a first start, but then I got stuck and hope for some guidance. Thanks!

• Is something like ConvexHullMesh what you're looking for w.r.t. #1? – N.J.Evans Mar 29 '17 at 13:48
• Maybe look into ListSurfacePlot3D[] as well... – J. M.'s torpor Mar 29 '17 at 14:05

Here's a simple example:

(*This is just me getting your data into an array of triples since I copy-pasted it.*)
data = ToExpression /@ Partition[StringSplit[a, WhitespaceCharacter], 3];
(*now data looks like {{x,y,z},{x,y,z},...}*)

You can use Show with ConvexHullMesh and Graphics3D for annotation, and HighlightMesh to change the style. I just plotted a point at the centroid and put an arrow pointing to that spot as an example:

hull = ConvexHullMesh@data;
Show[
 HighlightMesh[hull, Style[2, Opacity[0.5]]],
 Graphics3D[{
   Red,
   Point[RegionCentroid@hull],
   Arrow[{# + {5, 5, 5}, #}] &@(RegionCentroid@hull)
 }]
]

Which produces the following:

• Wow! Thanks, that's exactly what I wanted! Two follow-up questions though: 1. How can I add a text at the beginning of the arrow? 2. Can I somehow get a list of the points that only lie on the outer surface?
– holistic Mar 29 '17 at 14:57 • MeshCoordinates@hull will give you the coordinates of the vertices, and you can use Graphics3D[{Text["label",{x,y,z}],Arrow[...]...}]. That will be fairly basic, if you want something nicer, I'd suggest searching the site for info about annotating Graphics3D I'm pretty certain that's been discussed before. – N.J.Evans Mar 29 '17 at 16:12 • Great, thanks again! – holistic Mar 29 '17 at 16:13
# Congruence theorem

In planar geometry, a congruence theorem is a statement that can be used to easily prove the congruence of triangles. Triangles are congruent if they agree in shape and area. Congruence of triangles forms an equivalence relation; that is, congruent triangles can be regarded as equal.

## The congruence theorems

In the usual designations of the four congruence theorems, "S" stands for a side length and "W" for an angle (from the German Seite and Winkel):

SSS theorem (first congruence theorem): Two triangles that agree in their three side lengths are congruent.

SWS theorem (second congruence theorem): Two triangles that agree in two side lengths and in the included angle are congruent.

WSW theorem (third congruence theorem): Two triangles that agree in one side length and in the two angles adjacent to this side are congruent. Together with the theorem on the sum of the interior angles of a triangle, this also yields the following theorem:

SWW theorem: Two triangles that agree in one side length, one angle adjacent to this side, and the angle opposite this side are congruent.

Note: Two triangles that agree in two angles and in one side length are not necessarily congruent if it is not known which of the given angles are adjacent to the given side. From one side and two angles, up to three generally non-congruent triangles can be constructed, depending on whether the first, the second, or both angles are adjacent to the side.

SSW theorem (fourth congruence theorem): Two triangles that agree in two side lengths and in the angle opposite the longer side are congruent. The restriction, which is absent in a general SSW configuration, is expressed by a corresponding spelling or marking (e.g. SsW, Ssw or SSW_g; see the figure below).

If two triangles agree in two or all three interior angles, they are not necessarily congruent; they are, however, similar.
The following figure shows the quantities in which two triangles must agree for each of the four congruence theorems. From left to right: SSS, WSW, SWS, SSW.

## Proofs

Classically, one proves the congruence theorems by giving compass-and-straightedge constructions that produce a second triangle from the corresponding given quantities. If this works in essentially only one way, the two triangles are congruent. With the notation of the figure above, this goes as follows:

SSS: Given $a$, $b$ and $c$. Draw a segment $BC$ of length $a$; the circle around $C$ with radius $b$ and the circle around $B$ with radius $c$ intersect in two points $A_1$ and $A_2$, yielding two mirror-symmetric (and hence congruent) triangles $A_1BC$ and $A_2BC$. If one additionally fixes an orientation, the triangle is even unique. This also applies to the following constructions:

WSW: Given $a$, $\beta$ and $\gamma$. Draw a segment $BC$ of length $a$; the half-line (ray) from $B$ that encloses the angle $\beta$ with $BC$, and the half-line from $C$ that encloses the angle $-\gamma$ with $BC$, intersect in a point $A$.

SWS: Given $a$, $b$ and $\gamma$. Draw two half-lines (rays) with common vertex $C$ enclosing the angle $\gamma$; marking off the lengths $b$ and $a$ on them from $C$ yields $A$ and $B$.

SSW: Given $b$, $c$ and $\gamma$ (where $c > b$). Construct two half-lines with common vertex $C$ enclosing the angle $\gamma$; mark off the shorter length $b$ on one leg from $C$ to find $A$; the circle around $A$ with radius $c$ intersects the other leg in exactly one point $B$.

The picture on the right shows why, in the SSW theorem, the angle must be opposite the longer side. Otherwise one would obtain triangles that agree in three parts (side, side, angle) but are not congruent: the two triangles $A_1BC$ and $A_2BC$ agree in the side lengths $a$ and $c$ as well as in the angle $\gamma = \angle ACB$. The side lengths $b_1$ and $b_2$ differ, however.

## Remarks

• In Hilbert's system of axioms for Euclidean geometry, SWS has the rank of an axiom; the others are proved from it and the remaining axioms. Hilbert recognized this as necessary because in the traditional development, Euclid's proof ideas could not be derived purely logically from his axioms and postulates, but relied on the intuitively plausible free mobility of triangles.

• Under certain circumstances it is also possible to construct a triangle from three other determining pieces, among which, for example, the inradius, the circumradius, the area or the altitudes may occur. However, the associated congruence statements are not counted among the classical congruence theorems.

• In spherical geometry the situation is partly different. There, two (spherical) triangles are already congruent, and not merely similar, if they agree in their three interior angles. Specifying the third angle is also no longer redundant (spherical excess).
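The SSW ambiguity can also be seen numerically via the law of cosines: with sides $a$ and $b$ enclosing the angle $\gamma$ (so $\gamma$ is opposite $c$), the unknown side $b$ solves $b^2 - 2a\cos\gamma\,b + (a^2 - c^2) = 0$, and the number of positive roots tells whether the triangle is determined. A small sketch (the numeric values are mine, chosen for illustration):

```python
import math

def third_side(a, c, gamma):
    """Positive solutions b of b^2 - 2*a*cos(gamma)*b + (a^2 - c^2) = 0,
    given sides a, c and the angle gamma enclosed by a and b (opposite c).
    One solution: the triangle is determined. Two: the ambiguous case."""
    cosg = math.cos(gamma)
    disc = (2 * a * cosg) ** 2 - 4 * (a * a - c * c)
    if disc < 0:
        return []
    r = math.sqrt(disc)
    roots = ((2 * a * cosg - r) / 2, (2 * a * cosg + r) / 2)
    return sorted(b for b in roots if b > 0)

# Angle opposite the longer side (c > a): unique triangle, as SSW asserts.
print(third_side(3.0, 5.0, math.radians(40)))   # one solution
# Angle opposite the shorter side (c < a): two non-congruent triangles.
print(third_side(5.0, 3.0, math.radians(30)))   # two solutions
```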
## Congruence proofs

The four congruence theorems form the basis of a proof technique frequently used in elementary geometry: in a congruence proof, one justifies the equality of two segment lengths or of two angles by first showing the congruence of two suitable triangles and then deducing the equality of the corresponding side lengths or angles.

## References

1. Hartmut Wellstein: Elementary Geometry. Vieweg + Teubner, Wiesbaden 2009, ISBN 978-3-8348-0856-1, p. 12.
Lagrange multipliers for more than 2 equality constraints

I couldn't do the following question for hours: minimize $\sum_{i=1}^{n}x_{i}^{3}$ s.t. $\sum_{i=1}^{n}x_{i}=0$ and $\sum_{i=1}^{n}x_{i}^{2}=n$, by the Lagrange multiplier rule.

-

I am sure there is an example of using Lagrange multipliers with two constraints in pretty much any calculus textbook (Google gives me, for the obvious query, a list of results such that I find such examples in 4 of the first 5 results). Try to follow any such example out there, explain to us what you did, and then we can help you with what you were not able to do. – Mariano Suárez-Alvarez Oct 20 '10 at 19:56

I made my Lagrange function a0(x1^3+...+xn^3) + a1(x1+...+xn) + a2(x1^2+...+xn^2), where a0, a1, a2 are the multipliers lambda 0, 1, 2; then I take the partial derivative for each xi: (dL/dxi) = a0(3xi^2) + a1 + 2a2(xi) = 0. – nur Oct 20 '10 at 20:05

Now I have n such equations and 2 constraints, but I cannot eliminate a1 and a2 to get a relation between the xi's. So what can I do? – nur Oct 20 '10 at 20:11

Your question $\min f(x_{1},x_{2},\ldots ,x_{n})=\sum_{i=1}^{n}x_{i}^{3}$ s.t. $\sum_{i=1}^{n}x_{i}=0$ and $\sum_{i=1}^{n}x_{i}^{2}=n$ is equivalent to finding $\min f(x_{1},x_{2},\ldots ,x_{n-1})=-\left( \sum_{i=1}^{n-1}x_{i}\right)^{3}+\sum_{i=1}^{n-1}x_{i}^{3}$ s.t. $\left( \sum_{i=1}^{n-1}x_{i}\right)^{2}-n+\sum_{i=1}^{n-1}x_{i}^{2}=0$. – Américo Tavares Oct 20 '10 at 20:18

Why don't you explain that in the body of the question, writing down the system of equations you got in detail, and change the title to "How do I solve this system of equations?". From what you wrote, it is clear that you don't have any problems with Lagrange multipliers :) – Mariano Suárez-Alvarez Oct 20 '10 at 20:19

This is a development of my comment, changing the constraints from two to one.
Since $x_{n}=-\sum_{i=1}^{n-1}x_{i}$, the question "find $$\min \sum_{i=1}^{n}x_{i}^{3}$$ s.t. $$\sum_{i=1}^{n}x_{i}=0$$ and $$\sum_{i=1}^{n}x_{i}^{2}=n$$" is equivalent to finding $$\min f(x_{1},x_{2},\ldots ,x_{n-1}),$$ where $f(x_{1},x_{2},\ldots ,x_{n-1})=\left( \sum_{i=1}^{n-1}x_{i}^{3}\right) -\left( \sum_{i=1}^{n-1}x_{i}\right)^{3},$ s.t. $$c(x_{1},x_{2},\ldots ,x_{n-1})=0,$$ where $$c(x_{1},x_{2},\ldots ,x_{n-1})=\left( \sum_{i=1}^{n-1}x_{i}\right)^{2}-n+\sum_{i=1}^{n-1}x_{i}^{2}.$$ The first-order conditions are $$\nabla f(x_{1},x_{2},\ldots ,x_{n-1})-\nabla c(x_{1},x_{2},\ldots ,x_{n-1})\,\lambda =0\qquad (\ast ),$$ where $$\nabla f(x)=\begin{pmatrix}\frac{\partial f}{\partial x_{1}} & \ldots & \frac{\partial f}{\partial x_{n-1}}\end{pmatrix}^{T} =\begin{pmatrix} 3x_{1}^{2}-3\left( \sum_{i=1}^{n-1}x_{i}\right)^{2} & \ldots & 3x_{n-1}^{2}-3\left( \sum_{i=1}^{n-1}x_{i}\right)^{2}\end{pmatrix}^{T}$$ and $$\nabla c(x)=\begin{pmatrix} \frac{\partial c}{\partial x_{1}} & \ldots & \frac{\partial c}{\partial x_{n-1}}\end{pmatrix}^{T} =\begin{pmatrix} 2\left( \sum_{i=1}^{n-1}x_{i}\right) +2x_{1} & \ldots & 2\left( \sum_{i=1}^{n-1}x_{i}\right) +2x_{n-1}\end{pmatrix}^{T}.$$ Condition $(\ast)$ reads $$3x_{1}^{2}-3\left( \sum_{i=1}^{n-1}x_{i}\right)^{2}-2\lambda \left( \sum_{i=1}^{n-1}x_{i}\right) -2\lambda x_{1}=0$$ $$\vdots$$ $$3x_{n-1}^{2}-3\left( \sum_{i=1}^{n-1}x_{i}\right)^{2}-2\lambda \left( \sum_{i=1}^{n-1}x_{i}\right) -2\lambda x_{n-1}=0.$$ All these equations have the same form (the coefficients of $x_k$, $1\le k\le n-1$, are independent of $k$).
Thus the solution is attained when all $n-1$ variables are equal: $$x_{1}=\ldots =x_{n-1}=x^{\ast }.$$ Hence we have to solve the new system of two equations $$3(x^{\ast })^{2}-3\left( \sum_{i=1}^{n-1}x^{\ast }\right)^{2}-2\lambda \left( \sum_{i=1}^{n-1}x^{\ast }\right) -2\lambda x^{\ast }=0\qquad (\ast \ast )$$ $$\left( \sum_{i=1}^{n-1}x^{\ast }\right)^{2}-n+\sum_{i=1}^{n-1}(x^{\ast })^{2}=0.$$ We obtain, successively, $$3(x^{\ast })^{2}-3n+3\sum_{i=1}^{n-1}(x^{\ast })^{2}-2\lambda \left( \sum_{i=1}^{n-1}x^{\ast }\right) -2\lambda x^{\ast }=0$$ $$3(x^{\ast })^{2}-3n+3(x^{\ast })^{2}(n-1)-2\lambda x^{\ast }(n-1)-2\lambda x^{\ast }=0$$ $$(x^{\ast })^{2}(n-1)-1=0$$ $$x^{\ast }=x_{1}=\ldots =x_{n-1}=\pm \frac{1}{\sqrt{n-1}}.$$ Therefore $$x_{n}=-\sum_{i=1}^{n-1}x_{i}=\mp \frac{n-1}{\sqrt{n-1}}.$$ Since for $n>2$ $$(n-1)\left( \frac{1}{\sqrt{n-1}}\right)^{3}-\left( \frac{n-1}{\sqrt{n-1}}\right)^{3}<-(n-1)\left( \frac{1}{\sqrt{n-1}}\right)^{3}+\left( \frac{n-1}{\sqrt{n-1}}\right)^{3},$$ the minimum is attained at $$x^{\ast }=x_{1}=\ldots =x_{n-1}=\frac{1}{\sqrt{n-1}},\qquad x_{n}=-\frac{n-1}{\sqrt{n-1}},$$ and its value equals $$(n-1)\left( \frac{1}{\sqrt{n-1}}\right)^{3}-\left( \frac{n-1}{\sqrt{n-1}}\right)^{3}=\frac{1}{\sqrt{n-1}}-(n-1)^{3/2}.$$ Added: The original problem in $n$ variables is the solution of the $n$ equations $$\nabla f(x_{1},x_{2},\ldots ,x_{n})-\nabla c(x_{1},x_{2},\ldots ,x_{n})\,\lambda =0\qquad (\ast\ast\ast )$$ plus the two conditions $$c_{1}(x_{1},x_{2},\ldots ,x_{n})=0,\qquad c_{2}(x_{1},x_{2},\ldots ,x_{n})=0,$$ where $$\nabla f=\begin{pmatrix} \frac{\partial f}{\partial x_{1}} & \ldots & \frac{\partial f}{\partial x_{n}}\end{pmatrix}^{T},\qquad \nabla c=\begin{pmatrix} \left( \nabla c_{1}\right)^{T} & \left( \nabla c_{2}\right)^{T}\end{pmatrix},\qquad \lambda =\begin{pmatrix} \lambda _{1} & \lambda _{2}\end{pmatrix}^{T}.$$ Note: The equations would be similar for $m$ constraints $c_1,\dots ,c_m.$ Remark: Since I don't know how to write matrices with more than one row here, I
wrote them in the transposed form.

-

Using Lagrange multipliers we get $x_i^2 = \lambda x_i + \mu$, so $x_i$ can take only two values, say $X > 0$ and $Y < 0$ (we can't have $X=Y=0$ for obvious reasons). Suppose that $X$ is taken $a$ times and $Y$ is taken $b = n-a$ times. We have $$aX + bY = 0,\qquad aX^2 + bY^2 = n.$$ You can check that the solution is $X^2 = b/a$, $Y^2 = a/b$. The objective function that we want to minimize is $$aX^3 + bY^3 = b\sqrt{b/a} - a\sqrt{a/b}.$$ We want to make the first summand small and the second big. Note that as we increase $a$, the first summand becomes smaller and the second one becomes bigger, so we might as well choose $a=n-1$ and $b=1$. So $X = 1/\sqrt{n-1}$ and $Y = -\sqrt{n-1}$. Concluding, the optimum is obtained with the sequence consisting of $-\sqrt{n-1}$ once and $1/\sqrt{n-1}$ taken $n-1$ times, and the optimal value is $-(n-1)\sqrt{n-1} + 1/\sqrt{n-1}$.

-

It seems to me there's a typo in the first displayed system: it should be $aX^{2}+bY^{2}=n$, not $aX^{2}+bY^{2}=0$. Since $aX^{2}+bY^{2}=a\frac{b}{a}+b\frac{a}{b}=b+a=n$ and $a^{2}X^{2}+b^{2}Y^{2}+2abXY=a^{2}\frac{b}{a}+b^{2}\frac{a}{b}-2ab=ab+ab-2ab=0$, the system has the solutions you present, $X^{2}=\frac{b}{a}$ and $Y^{2}=\frac{a}{b}$. – Américo Tavares Oct 22 '10 at 23:08

+1 for the clever solution. – Américo Tavares Oct 23 '10 at 1:10

@AméricoTavares could you help me please to solve this problem math.stackexchange.com/questions/1300337/… – user2987 May 27 at 0:55

@user2987 I would help, but unfortunately your question is not an easy one for me. – Américo Tavares May 27 at 11:35
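The second answer's argument — the optimum takes only two values $X$ and $Y$, and $a = n-1$ is best — can be verified numerically (my sketch; the function name is mine):

```python
import math

def two_value_objective(n, a):
    """sum x_i^3 when X = sqrt(b/a) appears a times and Y = -sqrt(a/b)
    appears b = n - a times; these automatically satisfy both constraints."""
    b = n - a
    X, Y = math.sqrt(b / a), -math.sqrt(a / b)
    return a * X ** 3 + b * Y ** 3

n = 10
values = {a: two_value_objective(n, a) for a in range(1, n)}
best_a = min(values, key=values.get)
print(best_a, values[best_a])  # a = n-1 = 9, value 1/3 - 27 ≈ -26.667
```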
## Calculus: Early Transcendentals 8th Edition $(a+b)\times c=a \times c+b \times c$ Let $a= a_1i+a_2j+a_3k$; $b=b_1i+b_2j+b_3k$ and $c=c_1i+c_2j+c_3k$ $(a+b)\times c=\begin{vmatrix} i&j&k \\ a_1+b_1&a_2+b_2&a_3+b_3\\c_1&c_2&c_3\end{vmatrix}$ Using properties of determinants, we can write $(a+b)\times c=\begin{vmatrix} i&j&k \\ a_1&a_2&a_3\\c_1&c_2&c_3\end{vmatrix}+ \begin{vmatrix} i&j&k \\b_1&b_2&b_3\\c_1&c_2&c_3\end{vmatrix}$ But, $\begin{vmatrix} i&j&k \\ a_1&a_2&a_3\\c_1&c_2&c_3\end{vmatrix}=a \times c$ $\begin{vmatrix} i&j&k \\b_1&b_2&b_3\\c_1&c_2&c_3\end{vmatrix}=b \times c$ Thus, $\begin{vmatrix} i&j&k \\ a_1&a_2&a_3\\c_1&c_2&c_3\end{vmatrix}+ \begin{vmatrix} i&j&k \\b_1&b_2&b_3\\c_1&c_2&c_3\end{vmatrix}=a \times c+b \times c$ Hence, $(a+b)\times c=a \times c+b \times c$
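The distributivity identity can be spot-checked with concrete vectors (a quick sketch of mine; any integer triples work):

```python
def cross(u, v):
    """Cross product of 3-vectors given as tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

a, b, c = (1, 2, 3), (-4, 0, 5), (2, -1, 1)
lhs = cross(tuple(x + y for x, y in zip(a, b)), c)          # (a+b) x c
rhs = tuple(p + q for p, q in zip(cross(a, c), cross(b, c)))  # a x c + b x c
print(lhs == rhs)  # True
```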
## Grammars with context conditions and their applications. (English) Zbl 1087.68045

Hoboken, NJ: John Wiley & Sons (ISBN 0-471-71831-9/hbk; 978-0-471-73656-1/ebook). ix, 216 p. (2005).

The study of grammars with context conditions is an important area of formal language theory. Until now, results in this area have been scattered across various journal papers. It is the aim of this book to systematically summarize the concepts and results of this field.

Chapter 2 gives an introduction to formal languages and grammars. Chapter 3 treats context conditions placed on derivation domains: a wm-grammar is a pair $(G, W)$ where $G = (V, T, P, S)$ is a context-free grammar (with terminals $T$, nonterminals $V-T$, start symbol $S\in V-T$, and rule set $P$) and $W$ is a finite language over $V$. $(G, W)$ is of degree $i$ (a natural number) if $y\in W$ implies $|y| \leq i$. The direct derivation $\Rightarrow_{(G,W)}$ on $W^*$ is defined as follows: if $A\to y\in P$ and $xAz, xyz\in W^*$, then $xAz\Rightarrow_{(G,W)} xyz$. Of course, $L(G,W) = \{w\in T^*: S\Rightarrow^*_{(G,W)} w\}$. $(G, W)$ is called propagating if $A\to y\in P$ implies $y\neq\varepsilon$. Let WM$(i)$ and prop-WM$(i)$ be the families of languages generated by wm-grammars of degree $i$ and by propagating wm-grammars of degree $i$, respectively. The main results proved are: prop-WM(2) is the family of context-sensitive languages, and WM(2) is the family of recursively enumerable languages. Additionally, parallel wm-grammars are studied; the concept is similar to wm-grammars, but $G$ has to be an E0L grammar. Similar results are proved.

Chapter 4 studies grammars with context conditions placed on the use of productions. To state a typical result, first let, for $u\in V^*$ and finite $U\subseteq V^*$, $\text{sub}(u)$ denote the set of all substrings of $u$ and $\max(U)$ the length of the longest word in $U$.
A context-conditional grammar is a quadruple $G= (V, T, P, S)$, where $V, T, S$ are defined as above and $P$ is a finite set of productions of the form $(A\to x, \text{Per}, \text{For})$, with $A\to x$ as above and $\text{Per}, \text{For} \subseteq V^+$. $G$ has degree $(r,s)$ if, for every $(A\to x,\text{Per},\text{For})\in P$, $\max(\text{Per})\leq r$ and $\max(\text{For}) \leq s$. $G$ is called propagating if $(A\to x,\text{Per},\text{For})\in P$ implies $x\neq \varepsilon$. The direct derivation $\Rightarrow_G$ on $V^*$ is defined as follows: $u\Rightarrow_G v$ if there are $u_1,u_2\in V^*$ and $(A \to x,\text{Per},\text{For})\in P$ such that $u = u_1Au_2$, $v = u_1xu_2$, $\text{Per}\subseteq \text{sub}(u)$, and $\text{For}\cap \text{sub}(u) =\emptyset$. The families of languages generated by context-conditional grammars and by propagating context-conditional grammars of degree $(r,s)$ are denoted by CG$(r,s)$ and prop-CG$(r,s)$, respectively. Let $\text{CG}= \bigcup_{r,s\geq 0}\text{CG}(r,s)$ and $\text{prop-CG}= \bigcup_{r,s\geq 0}\text{prop-CG}(r,s)$. It is proved that prop-CG is the family of context-sensitive languages and CG is the family of recursively enumerable languages. Similar results are proved for special cases of context-conditional grammars, and parallel context-conditional grammars are studied as well.

Chapter 5 deals with a third type of context conditions: conditions placed on the neighbourhood of the rewritten symbols, including classical context-dependent grammars but also scattered context grammars, in which rewriting depends on symbols scattered throughout the word to be rewritten. Again, both sequential and parallel grammars are studied.

Chapter 6 studies grammatical transformations of grammars with context conditions into other equivalent grammars, such that both the input grammar and the transformed grammar generate their languages in a very similar way.
Chapter 7 demonstrates the use of grammars with context conditions in several applications related to biology. The monograph reflects the state of the art in grammars with context conditions.

### MSC:

68Q42 Grammars and rewriting systems
68-02 Research exposition (monographs, survey articles) pertaining to computer science
68Q45 Formal languages and automata

### Keywords:

grammars; context conditions; string-rewriting
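The one-step derivation relation of a context-conditional grammar is easy to sketch in code. The following is my toy illustration of the definition from Chapter 4, with hypothetical productions, not an example from the book:

```python
def sub(u):
    """All nonempty substrings of the string u."""
    return {u[i:j] for i in range(len(u)) for j in range(i + 1, len(u) + 1)}

def derive(u, productions):
    """One-step derivations u => v: a production (A, x, Per, For) applies
    only if every word of Per occurs in u and no word of For does; then
    any occurrence of A in u may be replaced by x."""
    out = set()
    for A, x, per, forb in productions:
        if not set(per) <= sub(u) or set(forb) & sub(u):
            continue
        for i in range(len(u)):
            if u[i:i + len(A)] == A:
                out.add(u[:i] + x + u[i + len(A):])
    return out

# Hypothetical toy grammar: S may keep growing only while no 'b' has appeared.
prods = [("S", "aS", (), ("b",)), ("S", "b", (), ())]
print(derive("aS", prods))   # {'aaS', 'ab'}
print(derive("ab", prods))   # set() -- the forbidding condition blocks rule 1
```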
# How do you divide (-2-5i)/(3i)?

Feb 13, 2016

Solution: $-\frac{5}{3} + \frac{2}{3}i$

#### Explanation:

To solve $(-2-5i)/(3i)$, multiply numerator and denominator by $i$ (note ${i}^{2} = -1$): $$\frac{i\cdot(-2-5i)}{i\cdot 3i} = \frac{-2i-5i^{2}}{3i^{2}} = \frac{-2i-5(-1)}{3\cdot(-1)} = \frac{5-2i}{-3},$$ which is equivalent to $-\frac{5}{3} + \frac{2}{3}i$.

Feb 13, 2016

$\frac{2}{3}i - \frac{5}{3}$

#### Explanation:

The denominator of the fraction is required to be real. To achieve this, multiply the numerator and denominator by $3i$: $$\frac{-2-5i}{3i} \times \frac{3i}{3i} = \frac{-6i-15i^{2}}{9i^{2}} = \frac{-6i+15}{-9} = \frac{2}{3}i - \frac{5}{3}.$$ [Note: ${i}^{2} = \left(\sqrt{-1}\right)^{2} = -1$]
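Python's built-in complex type (where `j` plays the role of $i$) gives a one-line check of both answers:

```python
# Divide (-2 - 5i) by 3i using Python's complex arithmetic.
z = (-2 - 5j) / (3j)
print(z)  # -5/3 + (2/3)i
```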
# Building Fire Stations

Time Limit: 5 Seconds    Memory Limit: 131072 KB

## Description

Marjar University is a beautiful and peaceful place. There are N buildings and N - 1 bidirectional roads on the campus. These buildings are connected by roads in such a way that there is exactly one path between any two buildings. By coincidence, the length of each road is 1 unit.

To ensure campus security, Edward, the headmaster of Marjar University, plans to set up two fire stations in two different buildings so that firefighters are able to arrive at the scene of a fire as soon as possible whenever fires occur. That means the longest distance between a building and its nearest fire station should be as short as possible.

As a clever and diligent student at Marjar University, you are asked to write a program to complete the plan. Please find two proper buildings in which to set up the fire stations.

## Input

There are multiple test cases. The first line of input contains an integer T indicating the number of test cases. For each test case:

The first line contains an integer N (2 <= N <= 200000). For the next N - 1 lines, each line contains two integers Xi and Yi. That means there is a road connecting building Xi and building Yi (indexes are 1-based).

## Output

For each test case, output three integers. The first one is the minimal longest distance between a building and its nearest fire station. The next two integers are the indexes of the two buildings selected for the fire stations. If there are multiple solutions, any one will be acceptable.

## Sample Input

2
4
1 2
1 3
1 4
5
1 2
2 3
3 4
4 5

## Sample Output

1 1 2
1 2 4
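For intuition, the task can be brute-forced on the samples by trying every pair of stations. This is an $O(N^3)$ sketch of mine for tiny inputs only; the stated limits (N up to 200000) require a smarter approach, such as binary searching the answer over the tree's structure:

```python
from collections import deque
from itertools import combinations

def bfs(adj, src):
    """Unweighted distances from src over the adjacency map."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def best_pair(n, edges):
    """Try all station pairs; return (radius, station1, station2)."""
    adj = {i: [] for i in range(1, n + 1)}
    for x, y in edges:
        adj[x].append(y)
        adj[y].append(x)
    dists = {v: bfs(adj, v) for v in adj}
    best = None
    for a, b in combinations(range(1, n + 1), 2):
        radius = max(min(dists[a][v], dists[b][v]) for v in adj)
        if best is None or radius < best[0]:
            best = (radius, a, b)
    return best

print(best_pair(4, [(1, 2), (1, 3), (1, 4)]))           # (1, 1, 2)
print(best_pair(5, [(1, 2), (2, 3), (3, 4), (4, 5)]))   # (1, 1, 4)
```

Both sample answers have radius 1; since any optimal pair is accepted, (1, 4) is as valid as the sample's (2, 4) for the path case.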
# My favorite bookmarklets

Bookmarklets to search a website, navigate it, and see what links to it.

The recent lifehacker article Ten Must-Have Bookmarklets reminded me that I’ve developed a few handy ones myself. A bookmarklet (“bookmark” + “applet”) is a little bit of Javascript embedded in a link. They usually take some information about the page you’re looking at and do something useful with it. For example, if you highlight some text on this page and click this demo it displays the highlighted text in a message box. This particular example is not very useful, but it demonstrates how a bookmarklet can grab information from the displayed web page and do something with it. The following shows what’s really in the link:

<a href="javascript:alert(document.getSelection())">this demo</a>

Bookmarklets more complex than this one may define and call functions, but they’re still all packed into an a element’s href attribute. The lifehacker article has one that builds on this use of the getSelection() method by looking up the selected text in an acronym dictionary.

Running bookmarklets on the page that contains them is rarely interesting, but when you keep these bookmarklets in your bookmarks menu (or, more likely, on your bookmarks toolbar), you can run them against anything, which is when they get valuable. You could drag that “this demo” link to your bookmarks toolbar and then highlight text on any web page, click that link, and see the highlighted text displayed in a message box, but you’d be better off dragging the more useful ones below to your toolbar:

• site’s homepage goes right to a particular site’s homepage—for example, to http://www.snee.com from this page.

• search site lets you search the website of the displayed page with a minimum of keystrokes.
Click it to display a search form with one field (for example, this form if you were looking at the lifehacker article), fill out that field with a string to search for, click “Go,” and Google searches that site for you. • cd .. goes to the parent directory of the displayed page’s directory.
Math Central Quandaries & Queries

Question from Sugavanas, a student:

The length of a rectangle exceeds its width by 5 m. If the width is increased by 1 m and the length is decreased by 2 m, the area of the new rectangle is 4 sq m less than the area of the original rectangle. Find the dimensions of the original rectangle.

Hi Sugavanas,

The key to these word problems is to read them carefully and decide which quantities to express as variables. You are to find the dimensions (length and width) of the rectangle, so let $L$ be the length of the rectangle and let $W$ be its width.

Read the first sentence carefully. It says the length exceeds the width by 5 m, or in terms of the variables, $L$ is 5 m more than $W.$ As an equation that's

$L = W + 5.$

In the second sentence you are told to increase the width $W$ by 1 m and decrease the length $W + 5$ by 2 m. What are the new dimensions?

The area of a rectangle is the length times the width, so the area of the original rectangle is $(W + 5) \times W$ square meters. What is the area of the modified rectangle? This is 4 sq m less than $(W + 5) \times W.$ Solve for $W.$

Write back if you need more assistance,
Penny

Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences.
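Following Penny's setup, the new rectangle is $(W+3)$ by $(W+1)$, and a brute-force search confirms the dimensions — skip this sketch if you want to finish the algebra yourself:

```python
# Find the width W with (W+3)*(W+1) == (W+5)*W - 4, then L = W + 5.
W = next(w for w in range(1, 1000) if (w + 3) * (w + 1) == (w + 5) * w - 4)
L = W + 5
print(L, W)  # 12 7
```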
# Relationship between ratios and averages of ratios? My colleague and I were wondering: For a weighted average of ratios - what weight would need to be assigned to each term in the expression such that the result of the weighted average was the same as the sum of the numerators in the ratios divided by the sum of the denominators in the ratios: $\displaystyle \frac{\sum_{i = 1}^{n} x_i }{\sum_{i = 1}^{n} y_i} = \frac{\sum_{i = 1}^{n} ( (\frac{x_i}{y_i}) ? ) }{n}$ In the equation above, what would the "?" need to be such that the equality holds? ## 1 Answer Well, I suppose this isn't a terribly insightful answer, but $$\frac{\sum_{i=1}^n x_i}{\sum_{i=1}^n y_i}=\frac{\sum_{i=1}^n\big(\!\frac{x_i}{y_i}\!\big)\cdot w_i}{n}$$ where $$w_i=\frac{ny_i}{\sum_{i=1}^ny_i},$$ because $$\frac{\displaystyle\sum_{i=1}^n\left(\frac{x_i}{y_i}\right)\cdot \left(\frac{ny_i}{\sum_{i=1}^ny_i}\right)}{n}=\displaystyle\sum_{i=1}^n\left(\frac{x_i}{y_i}\right)\cdot \left(\frac{y_i}{\sum_{i=1}^ny_i}\right)=\sum_{i=1}^n\left(\frac{x_i}{\sum_{i=1}^ny_i}\right)=\frac{\sum_{i=1}^n x_i}{\sum_{i=1}^n y_i}.$$ • Actually, this is EXACTLY what I was hoping for. And it actually is insightful for us... Thank you very much. – Steve Apr 26 '12 at 17:50 • Glad I could help! – Zev Chonoles Apr 26 '12 at 17:52
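A quick numerical check of the identity, with made-up values for $x_i$ and $y_i$ (any nonzero denominators work):

```python
x = [3.0, 1.0, 4.0]
y = [2.0, 5.0, 3.0]
n = len(x)
w = [n * yi / sum(y) for yi in y]          # the weights from the answer
lhs = sum(x) / sum(y)
rhs = sum((xi / yi) * wi for xi, yi, wi in zip(x, y, w)) / n
print(lhs, rhs)  # both approximately 0.8
```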
# remember the preliminaries!

1) (a+b)^n - a^n - b^n is always divisible by ab for all n belongs to N.

2) a polynomial of odd degree will always have one of its roots to be either +1 or -1.

Note by Raven Herd 2 years, 9 months ago

For statement 1), use the binomial theorem to find that $$(a + b)^{n} = a^{n} + \binom{n}{1}a^{n-1}b + \binom{n}{2}a^{n-2}b^{2} + .... + \binom{n}{n-1}ab^{n-1} + b^{n}.$$ After subtracting $$a^{n}$$ and $$b^{n}$$ the remaining terms are all divisible by $$ab,$$ and thus so is their sum. Statement 2) is not actually true. $$f(x) = x - 2$$ is of odd degree and only has root $$2.$$ - 2 years, 9 months ago

sir , I have a problem . I very much want to study number theory but I am unable to grasp the concepts beyond modular multiplication . I earnestly want to learn those Fermat rules ,Euler theorem , CRT etc,etc, Please help. - 2 years, 9 months ago

Have you tried the wikis here on Brilliant.org? For example, here is the one on the CRT.
This can be found by choosing "Topics", followed by "Number Theory", "Modular Arithmetic" and then the 'open book' icon to the right of "Chinese Remainder Theorem". You can find the other topics in your list in this fashion as well. - 2 years, 9 months ago

yes I have tried them. I devoted a full week and I was quite confident that I could learn it. But it didn't work. Please reply soon. P.S. Sir, an odd request: Where is the started problems option? I have been hunting for it since the setup of Brilliant changed. - 2 years, 9 months ago

I'm not sure what to suggest next. If you spent a week going through all the wikis then you probably know more than I do now about this topic. Sometimes it's just about practice, so keep trying Number Theory questions on Brilliant (and elsewhere) and maybe the concepts will eventually become second nature. Sorry I couldn't suggest something more helpful. :( As for the "Started Problems" option, I noticed it missing from the main page a few days ago as well. I eventually found it, though. Click on the "blue planet" icon and choose the "View mobile site" option. Once on that page, click on the "three dot" icon in the upper right corner and you will see the "Started problems" option listed there. Then, if you want, you can choose the "View full site" option on the same list to take your list of started problems back to the main site format. - 2 years, 9 months ago

Sir, you are too modest. I have decided that I will go through the wikis again. I believe I have not been sincere, and Brilliant wikis are the best study material I have come across till now; the best part about them is that they are prepared by people who find a particular topic fascinating, which greatly reduces the chance of errors. Anyway, thanks for your consideration. - 2 years, 9 months ago

You're welcome. Good luck with your studies. :) - 2 years, 9 months ago
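As a numerical aside (not part of the thread), the binomial-theorem argument for statement 1) can be spot-checked over a grid of values:

```python
# Spot-check of statement 1): (a+b)^n - a^n - b^n is divisible by a*b.
# Not a proof; the binomial-theorem argument above is the actual proof.
for a in range(1, 8):
    for b in range(1, 8):
        for n in range(1, 10):
            assert ((a + b) ** n - a ** n - b ** n) % (a * b) == 0
print("divisible in all tested cases")
```

Every surviving term $\binom{n}{k}a^{k}b^{n-k}$ with $1 \le k \le n-1$ contains at least one factor of $a$ and one of $b$, which is exactly what the check confirms.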
# Line with predefined length tangent to circle

I have one math problem which I'm trying to solve. I know it can be done but I'm a little bit "rusty" with my algebra. I'm kindly asking for help. The problem and the procedure of my solution are shown in the attached image. I'm trying to find a general solution for the line equation (points $1$ and $2$ lie on this line) which is tangent to a circle with radius $R$ and center point at $(0,a)$. The length $L$ of the line from point $1$ to point $2$ is also known. Basically I'm searching for the coordinates of points $1$ and $2$ with $y_2=0$. I think that for $L>a$ there should be 4 possible solutions. I'm only interested in the one which has $x_1>0$ and $x_2>0$. I have $4$ equations, which should be enough for solving this system, but it gets complicated. Maybe someone has some pointers or is there an easier way to do it?

Here's a partial answer: recall that a tangent to a circle at $T$ from an external point $P$ is perpendicular to the radius at $T$. So using the Pythagorean Theorem, \begin{equation*} R^2+L^2 = d((x_2,0),(0,a))^2 = x_2^2 + a^2, \end{equation*} so that since you want $x_2>0$ we get $x_2 = \sqrt{R^2+L^2-a^2}$. Note that while there are four solutions, there are only two possible values of $x_2$, since the two tangents from any external point to a circle have equal lengths. Next, solve $x_1^2 + (y_1-a)^2 = R^2$ for $y_1$, giving $y_1 = a\pm\sqrt{R^2-x_1^2}$; again take the positive square root. Using the equation for the length of the tangent, we get \begin{equation*} (x_1-\sqrt{L^2+R^2-a^2})^2 + (a+\sqrt{R^2-x_1^2})^2 = L^2, \end{equation*} or \begin{equation*} (R^2+L^2)x_1^2 - 2R^2\sqrt{L^2+R^2-a^2}\,x_1 + R^4 - a^2R^2 = 0. \end{equation*} Solving the quadratic for $x_1$ gives \begin{equation*} x_1 = \frac{aRL+R^2\sqrt{L^2+R^2-a^2}}{L^2+R^2}. \end{equation*} I don't see a simple way yet to derive $y_1$ from this, but the answer is \begin{equation*} y_1 = \frac{L}{R}x_1 = \frac{L(aL+R\sqrt{L^2+R^2-a^2})}{L^2+R^2}. \end{equation*}
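A numerical sanity check of these closed forms (written in the question's $R$, $L$, $a$; sample values chosen so that $L > a$ and the configuration exists):

```python
from math import sqrt, isclose

a, R, L = 2.0, 1.0, 3.0  # sample values with L > a

x2 = sqrt(R**2 + L**2 - a**2)
x1 = (a * R * L + R**2 * sqrt(L**2 + R**2 - a**2)) / (L**2 + R**2)
y1 = (L / R) * x1

# point 1 lies on the circle of radius R centred at (0, a)
assert isclose(x1**2 + (y1 - a)**2, R**2)
# the segment from point 1 to point 2 = (x2, 0) has length L
assert isclose((x1 - x2)**2 + y1**2, L**2)
# tangency: the radius to point 1 is perpendicular to the segment
assert isclose(x1 * (x1 - x2) + (y1 - a) * y1, 0.0, abs_tol=1e-9)
print(x1, y1, x2)
```

All three conditions (on-circle, segment length, perpendicular radius) hold, which is what the original four-equation system encodes.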
# Reducing dtype size of a Numpy/Pandas array

I run into memory problems when processing very large dataframes. The problem is that Pandas uses the float64 and int64 numpy dtypes by default even in cases when it is totally unnecessary (you have e.g. only binary values). Furthermore, it is not even possible to change this default behaviour. Hence, I wrote a function which finds the smallest possible dtype for a specific array.

```python
import numpy as np
import pandas as pd

def safely_reduce_dtype(ser):  # pandas.Series or numpy.array
    orig_dtype = "".join([x for x in ser.dtype.name if x.isalpha()])  # 'float' or 'int'
    mx = 1
    for val in ser.values:
        new_itemsize = np.min_scalar_type(val).itemsize
        if mx < new_itemsize:
            mx = new_itemsize
    new_dtype = orig_dtype + str(mx * 8)
    return new_dtype  # convert with ser.astype(new_dtype)
```

One caveat: `np.min_scalar_type` reports an unsigned type for non-negative values, so a series mixing large positive and negative numbers (e.g. `[200, -1]`) can be assigned a signed dtype one size too small; check the value range if that can occur in your data.

So, e.g.:

```python
>>> import pandas as pd
>>> serie = pd.Series([1, 0, 1, 0], dtype='int32')
>>> safely_reduce_dtype(serie)
'int8'
>>> float_serie = pd.Series([1.0, 0.0, 1.0, 0.0])
>>> safely_reduce_dtype(float_serie)
'float16'  # from float64; float16 is the smallest float dtype NumPy offers
```

Using this you can reduce the size of your dataframe significantly, up to a factor of 4 for floats and a factor of 8 for integers.

## Update:

There is `pd.to_numeric(series, downcast='float')` in Pandas 0.19. The above was written before it was out and can be used in older versions.
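For completeness, a sketch of the built-in route mentioned in the update, assuming pandas ≥ 0.19; note that `downcast` stops at `float32` for floats rather than `float16`:

```python
import pandas as pd

# pd.to_numeric with `downcast` picks the smallest dtype that holds the values
s_int = pd.Series([1, 0, 1, 0], dtype="int64")
s_float = pd.Series([1.0, 0.0, 1.0, 0.0])  # float64 by default

small_int = pd.to_numeric(s_int, downcast="integer")
small_float = pd.to_numeric(s_float, downcast="float")

print(small_int.dtype, small_float.dtype)  # int8 float32
```

`downcast="unsigned"` is also available when all values are non-negative, which sidesteps the signed/unsigned caveat of the hand-rolled function above.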
# Wildfires and the role of their drivers are changing over time in a large rural area of west-central Spain

## Abstract

During the last decades, wildfires have been changing in many areas across the world, due to changes in climate, landscapes and socioeconomic drivers. However, how the role of these drivers changed over time has been little explored. Here, we assessed, in a spatially and temporally explicit way, the changing role of biophysical and human-related factors on wildfires in a rural area in west-central Spain from 1979 to 2008. Longitudinal Negative Binomial (NB) and Zero-Inflated Negative Binomial (ZINB) mixed models, with time as interacting factor (continuous and categorical), were used to model the number of fires of increasing size (≥1–10 ha, >10–100 ha, >100 ha) per 10 × 10 km cell per year, based on fire statistics. We found that the landscape was rather dynamic, and generally became more hazardous over time. Small fires increased and spread over the landscape with time, with medium and large fires being stable or decreasing. NB models were best for modelling small fires, while ZINB for medium and large; models including time as a categorical factor performed the best. Best models were associated with topography, land-use/land cover (LULC) types and the changes they underwent, as well as agrarian characteristics. Climate variables, forest interfaces, and other socioeconomic variables played a minor role. Wildfires were initially more frequent in rugged topography, conifer forests, shrublands and cells undergoing changes in LULC types of hazardous nature, for all fire sizes.
As time went by, wildfires lost their links with the initial fire-prone areas and, as they spread, became more associated with lower-elevation areas, with higher solar radiation, herbaceous crops, and large size farms. Thus, the role of the fire drivers changed over time; some decreased their explanatory power, while others increased it. These changes with time in the total number of fires, in their spatial pattern and in the controlling drivers limit the ability to predict future fires. ## Introduction During the last decades, wildfires have been changing in many areas across the world. While changes in climate are often the main factors1,2,3, changes in landscapes and socioeconomic factors are also associated with such variations4,5,6. That is particularly important in areas where humans are the major source of ignitions and of changes in the landscape and its flammability. Hence, addressing how human factors have affected wildfires in recent times is important and challenging, due to their varying nature, together with continuing changes in other fire drivers. At regional and local scales, human factors play a major role in wildfires, overriding in part the role of climate7,8. Human factors exert a dual effect on the fire regime, by either decreasing fires (e.g., suppression policy9) or increasing them (e.g., Land Use/Land Cover [LULC] changes10), and can dampen the fire-climate relationship at different spatial and temporal scales8. In the European Mediterranean countries (EUMed), the collapse of traditional rural socioeconomic systems since the second half of the XXth century has caused important landscape and socioeconomic changes, increasing landscape flammability and fire risk10,11. Specifically, fire risk has increased at both the wildland-urban interfaces (WUIs) and wildland-agrarian interfaces (WAIs), where population and human infrastructures are in contact with forest areas and there is intense competition between agricultural and forestry activities12,13,14.
Nonetheless, the effects of those driving factors on wildfires vary across temporal and spatial scales8,15, requiring spatio-temporal models capable of simulating the spatial and temporal fire patterns. Several studies at EUMed have attempted to assess the ongoing evolution of the roles that biophysical conditions and human factors play in fire occurrence (presence/absence) and frequency (number of fires per unit area and time)16,17,18,19,20 from a stationary temporal point of view. In addition, other studies at EUMed have applied different models for various time periods and later compared them over the longer term21,22,23,24,25. However, models that do not assume stationarity in the controlling variables (e.g., panel or longitudinal models) can be most useful26,27,28,29,30. On the other hand, to account for spatial heterogeneity (spatial non-stationarity), local Geographically Weighted Regression (GWR) has been commonly applied18,20,23. Nevertheless, to explicitly model spatial heterogeneity, hierarchical multilevel modeling or Generalized Linear Mixed Models (GLMM) are also appropriate. These models have been successfully applied for predicting fire occurrence31,32,33. Finally, for dealing with a high number of zero observations, zero-inflated models have been used for predicting fire spatial patterns4,34,35,36,37,38. In this work, we modeled the variation of wildfires in space and over time by applying longitudinal Negative Binomial (NB) and Zero-Inflated Negative Binomial (ZINB) mixed models over a thirty-year period, using wildfires of three size categories (≥1–10 ha, >10–100 ha, >100 ha) that occurred in a large rural area in west-central Spain. The objective was to determine the relationship between occurrence and frequency of wildfires as a function of various factors, including topography, LULC types and their changes, socio-economic variables, forest interfaces, linear features and climate in a dynamical way.
We expected that wildfires would have increased in the area during the study period due to changes in some of the main fire drivers: i) an increase in landscape hazardousness as a result of LULC changes (i.e., land abandonment, afforestation, etc.), ii) socio-economic restructuring in the area due to rural exodus (i.e., population decline, shifts in economic activities), and iii) changes in climatic conditions (i.e., warming). ## Methodology ### The Study area The study area covers 56,000 km2 in west-central Spain (UTM coordinates 4369–4551 and 201–394 30 N) (Fig. 1A). The area is characterized by the mountainous landscapes of Sierra de Gredos, and the gentler mountains at the SE, both flanked by relatively flat areas (Fig. 1B). The climate is Mediterranean, with colder and wetter conditions in the mountainous areas, and warmer and drier at the low-lands (Fig. 1C). Soils in the mountain areas are shallow, with high stoniness and coarse texture (cambisol, regosol and lithosol), whereas, in the low-lands, they are deep and fine textured (luvisol and fluvisol) (Fig. 1D). ### Fire data and explanatory variables We used the yearly number of fires recorded at 10 × 10 km grid cells (n = 443) of the Spanish Fire Statistics database (EGIF, Ministry for the Ecological Transition), from 1979 to 2008 (30 years, n = 13,290 cells [443 × 30]). We selected fires ≥1 ha, and defined three fire size categories: small (≥1 to 10 ha); medium (>10–100 ha) and large (>100 ha). Most fires were lit by people.
The variables used to explain number of fires per cell per year included: topography (elevation, slope, radiation), landscape features (i.e., LULC [land-use-land cover] types and their changes [i.e., afforestation, agricultural abandonment, etc.], forest interfaces [wildland-urban interface, WUI; wildland-agrarian interface, WAI; wildland-grassland interface, WGI], linear infrastructures [roads, railways]), agrarian characteristics and socio-economy (farm size and density, agrarian holders [age, percent time dedicated], population density, employment), and climate (maximum temperature, the drought index SPEI, i.e., Standardized Evaporation-Precipitation Index, and Fire Weather Index of the Canadian system, FWI) (see Supplementary Material online, SM Table 1). These variables proved to be important in explaining wildfires in other areas12,13,18,19. An exploratory analysis based on Principal Components (PCA) was carried out to assess collinearity among explanatory variables (i.e., co-variates). Moreover, trends over time of the number of fires per cell for each fire size category and of each explanatory variable were assessed by means of the Mann-Kendall test (Kendall package39 in R software40). Additionally, we assessed the significance of the proportion of cells that changed over the years against the null hypothesis of an even proportion of positive/negative or no change, by a G test of goodness of fit41. The datasets generated during the current study are available in the figshare repository https://doi.org/10.6084/m9.figshare.7159727. ### Statistical approach #### Longitudinal NB and ZINB mixed models In gridded fire data, like the ones used here, fire counts are often characterized by a high number of zero observations, as no fires may occur in some years in a high number of spatial units. The excess of zeros yields a variance greater than the mean (over-dispersion)42. 
To handle over-dispersion in count data, a Negative Binomial (NB) distribution may be appropriate. Moreover, when in addition to over-dispersion an “excess of zeros” may be present, Zero-Inflated Negative Binomial (ZINB) models are theoretically most suitable43. ZINB models are composed of two parts: 1) a logistic regression for the “excess zeros” over and above what would be predicted by the count distribution, and 2) a regression (either Poisson or Negative Binomial) for the counts part42,44. Likewise, the number of fires per year at a given cell (i.e., fire frequency) is a type of longitudinal data with repeated measures, in which the observations are taken over time on the same subject (i.e., 10 × 10 km grid cells), thus observations are not independent. In these cases, longitudinal NB or ZINB mixed models handle dependency by incorporating an appropriate random component in the model structure45,46,47. To evaluate the suitability of one modelling approach over the other, the Vuong statistic48 was used. However, we did not find strong evidence to conclude that one model fitted the data better than the other49, and thus we decided to test both NB and ZINB models. #### Modelling strategy We applied longitudinal NB and ZINB mixed models to three fire size categories (small [≥1–10 ha], medium [>10–100 ha], and large [>100 ha]) in order to explain the number of fires per cell per year as a function of several co-variates fitting univariate models (Fig. 2). To account for the spatial and temporal dependencies of our data, two random components were included: (i) time (T, years) as level 1 (repeated measures), which represents the “within-subject variance”, and (ii) an identifier for each grid cell as level 2 (subjects), which represents the “between-subject variance”45. In addition, to capture changes over time in the response variable and co-variates, time (T) was additionally included as an interacting factor.
This was done in three different ways: either it was not considered (T = 0), or considered as a continuous (T = 1, 2, …, 30) or categorical (T = 1; T = 2; …, T = 30) factor. This permitted obtaining three types of models, respectively: (i) “Average” models, which provided the average response and the average effects of co-variates over the whole period; (ii) “Average change” models, which provided the initial response and the initial effect of each co-variate, as well as the average rate of change (positive or negative) over the years; and (iii) “Annual change” models, similar to the previous models except that they provided coefficients of change between the initial year and any other given year separately. “Annual change” models permitted a finer analysis of the trends, which were later explored by means of a Mann-Kendall test (Fig. 2). Null models (without covariates, thus being pure random effects models) provided the baseline to assess the performance of univariate models for each co-variate. Model outputs were: intercept, slope of the time factor, and logit coefficient of the co-variates. The logit-coefficients, which were standardized by z-score and were conditional on the random effects, represent the following: in the “Average” models, the average effect of a covariate (X) on the response variable (Y) when X changes between subjects and across time by one standard deviation (SD) unit47. In “Average change” and “Annual change” models, the regression coefficients of X were also conditional terms and must be interpreted along with the interaction term (X*Z [time interaction term])46. Finally, the logit coefficients were exponentiated to obtain the Odds Ratio (OR) and the Rate Ratio (RR). The OR represents the increasing or decreasing odds of the excess of zeros in the outcome (Y) for the zeros part in ZINB mixed models, and the RR represents the percentage increase or decrease in the outcome (Y) in NB mixed models and in the count part of ZINB mixed models.
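The two-part ZINB structure described above can be made concrete through its probability mass function; a minimal sketch with illustrative parameter names (pi for the zero-inflation probability, r for the NB size, mu for the NB mean — not the paper's estimates):

```python
from math import lgamma, exp, log

def nb_pmf(k, r, mu):
    # negative binomial with size r and mean mu, parameterised via p = r/(r+mu)
    p = r / (r + mu)
    return exp(lgamma(k + r) - lgamma(r) - lgamma(k + 1)
               + r * log(p) + k * log(1.0 - p))

def zinb_pmf(k, pi, r, mu):
    # part 1: structural zeros with probability pi; part 2: NB counts
    base = (1.0 - pi) * nb_pmf(k, r, mu)
    return pi + base if k == 0 else base

pi, r, mu = 0.3, 1.5, 2.0
total = sum(zinb_pmf(k, pi, r, mu) for k in range(200))
print(abs(total - 1.0) < 1e-9)                     # the pmf sums to one
print(zinb_pmf(0, pi, r, mu) > nb_pmf(0, r, mu))   # "excess zeros" at k = 0
```

The second check is the defining feature of the model: the zero-inflation part adds probability mass at zero beyond what the count distribution alone would predict.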
Fire occurrence (presence/absence) was assessed by the intercepts and coefficients of the zeros part in ZINB mixed models and fire frequency (number of fires) by the intercepts and coefficients of NB mixed models and those of the count part in ZINB mixed models. #### Model evaluation and goodness of fit Longitudinal NB and ZINB mixed models were calculated based on a Bayesian approach, using strong normal priors (V = diag (2) and nu = 4) after a sensitivity analysis. The posterior distributions of the model parameters were obtained using the Markov Chain Monte Carlo (MCMC) method, whose accuracy was assessed by density plots, autocorrelation analysis and the Geweke diagnostic50. Furthermore, we assessed the accuracy of the longitudinal NB and ZINB mixed models by linear regressions between the observed versus predicted values, using annual data (n = 13,290), the proportion of zeros and counts correctly estimated, and by spatial overlays between predicted and observed values using aggregated (whole period) data. All models were calculated using the MCMCglmm package50 in R software40. Given the large number of models, we focused on those univariate NB and ZINB mixed models that had a reduction [<−10] of DIC (Deviance Information Criterion), showed significant regression coefficients of the co-variates (p < 0.05), and explained >20% of random variance relative to that of null models (pseudo-R2). For each model, we checked that Markov chains did not show autocorrelation or any patterning in parameter estimations. ## Results ### Changes in wildfires and drivers During the 30-yr period, there were 23,435 fires, which burned a total of 456,253 ha. The number of fires per cell over the study period varied between 0 and 326. Annually, a high proportion of cells contained zeros, thus the distribution was heavily skewed toward zero (74%, 88% and 96% of total annual counts were zeros for fires ≥1–10 ha, 10–100 ha and >100 ha, respectively).
The three fire size categories accounted for 34%, 11%, 3% of all fires, and 7%, 18%, and 74% of the burned area, respectively (Fig. 3A). In the whole area, temporal trends of small fires showed a significant increase (Mann-Kendall Tau: +0.54, p < 0.001), no significant trend was obtained for medium fires, and a significant decreasing trend for large fires (Tau: −0.32, p < 0.01) (Fig. 3B). Temporal trends at cell level indicated that 26%, 4% and <1% of cells showed a significant positive trend, whereas 3.8%, 8% and 7% of cells showed a significant negative trend for each fire size, respectively (Fig. 3C). PCA of the explanatory variables showed that these could be merged in 3 main axes (SM Fig. 1). Axis 1 (38% of total variance) was positively related to areas with high solar radiation and high density of livestock, and negatively related to high elevation and slope ranges, with hazardous vegetation (shrublands and dense forests), which experienced hazardous stability and hazardous LULC changes (i.e., densification and afforestation). Axis 2 (24%) was positively related to areas with herbaceous crops, crop-trees and WAIs, and negatively related to pastures, agroforestry, large farms, agriculture abandonment and WGIs. Axis 3 (19%) was mainly related to artificial uses, development changes (i.e., artificialization), population density, unemployment and WUIs. Moreover, we found that landscapes in the study area were rather dynamic (SM Fig. 2 and SM Table 2). In general, landscapes became more hazardous over time due to the increase of hazardous stability (i.e., hazardous LULC types that remained so over time) thus accumulating fuel, the spread of open and broadleaved forests as well as pastures, which resulted in important spatial changes in forest interfaces. 
Other important changes were the appearance of development in formerly more natural areas, hence increasing WUIs; the spread of non-hazardous LULC types (i.e., herbaceous crops and agroforestry) over areas in which they were not so abundant, thus raising the WAIs; and the loss of abundance of some hazardous LULC types (i.e., coniferous forests, shrublands) in some cells due to wildfires10, which was compensated by increases in other cells not traditionally occupied by such LULC types at the beginning of the study. Regarding socioeconomic factors, we found: population increases in peri-urban areas around the main towns and cities; a reduction of employees in the primary sector, with an increase in employment in the building sector; a significant reduction of farm density, together with a decrease of small farms and an increase of larger ones; a significant increase of livestock density and a reduction of agricultural machinery. Finally, in relation to climate, there was no clear trend for any of the variables analyzed (SM Fig. 2). ### Wildfires modelling The number of fires per cell per year was small at the beginning (1979) for all fire size categories, particularly for large fires, as is reflected in the low rate ratio of the intercepts of “Average change” null NB and ZINB mixed models. According to NB mixed models, small fires tended to increase over time, medium fires did not show any significant trend, and large fires decreased, as shown by the effect of time (Table 1). Similarly, time effects in the “Average change” null ZINB mixed models indicated that the probability of small and medium fire occurrence (presence/absence) increased over time (lower excess of zeros), whereas the frequency (number of fires per cell per year) of medium and large fires (counts part of the model) slightly decreased (Table 1, Fig. 4).
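As a reading aid for the rate ratios reported in these models, exponentiating a standardized log coefficient gives the percent change in expected fire counts per SD unit of the covariate; the coefficient below is illustrative, not one of the paper's estimates:

```python
from math import exp

coef = 0.25                  # illustrative standardized log coefficient
rr = exp(coef)               # Rate Ratio
pct = (rr - 1.0) * 100.0     # percent change in expected counts per SD unit
print(round(rr, 3), round(pct, 1))  # 1.284 28.4
```

The odds ratios of the zeros part are read the same way, except that they scale the odds of an excess zero rather than the expected count.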
Based on reduction of DIC, the best univariate models were the NB mixed models for small fires, and the ZINB mixed models for medium and, particularly, for large fires (Fig. 5A, Table 2). In relation to the way “time” was included in the models, “Annual change” models were always better than the other two models for all fire sizes, although for large fires, “Average” and “Average change” models were also appropriate (Fig. 5B). Linear regression between observed versus predicted values indicated that, on average, for all co-variates and significant models, small fires were better explained (R2 = 62%), followed by medium (R2 = 35%), and, to a lesser extent, by large ones (R2 = 18%) (Fig. 6A). Moreover, from ZINB mixed models we ascertained that the number of zeros was better predicted for both medium and large fires than for small ones (91%, 103% and 69%, respectively), whereas the number of fires (counts) was better predicted for small fires than for medium and large fires (89%, 57%, 19%, respectively). Spatially, there was an error of ±15 fires per cell for small fires, −9/+2 for medium fires and −10/+5 for large fires (Fig. 6B–D). ### The role of fire drivers on wildfires Some of the co-variates were able to largely reduce DIC and explain a large part of the random variance (pseudo-R2) (SM Tables 3–8). In general, topography, LULC types, LULC changes and agrarian characteristics were the groups of variables that were more related to fires of all sizes, although with differences depending on fire size (Table 2, SM Tables 3–8). We found that topographic features were strongly linked to fire occurrence and frequency for all fire sizes, particularly for large fires. High fire occurrence (low excess of zeros) and frequency were linked to high elevation and slope ranges, whereas the opposite occurred in low-elevation areas with high radiation.
Regarding LULC types, fires of all sizes were positively related to conifer forests, which mainly explained the fire occurrence of small fires, and the fire frequency of medium and large ones. Similarly, shrublands were positively related to the occurrence of all fire sizes, particularly for small and medium fires. On the contrary, herbaceous crops were negatively linked to fires of all sizes, mainly large fires. Regarding LULC changes, hazardous stability and afforestation positively explained fires of all sizes, whereas agriculture conversion showed the opposite. Concerning socio-economic variables, agrarian characteristics showed that large farms, usually under leasing, were negatively related to fires of all sizes. Population and employment variables were weakly related to fires. Finally, forest interfaces, linear structures and climate variables were marginal in explaining fires (Table 2, SM Tables 3–8). ### Changes through time in the role of fire drivers The changing role through time of fire drivers on wildfires was ascertained using the Mann-Kendall test on the logit coefficients of “Annual change” NB and ZINB mixed models (Table 2). Regarding topography (SM Fig. 7), areas with high elevation and slope ranges showed a lesser probability of having fires (greater number of zeros) over time. In contrast, areas with high radiation (i.e., flatter areas) showed the opposite trend. Regarding LULC types (SM Fig. 8), coniferous forests tended to lose statistical weight (coefficients in the zeros part tended to zero) for explaining fire occurrence for all fire sizes, and showed negative trends for fire frequency regardless of size. Additionally, shrublands tended to have less fire occurrence of small fires over time, not showing a significant trend for medium and large ones.
Conversely, herbaceous crops showed a clear trend to increase the probability of occurrence of small fires, and reduced their statistical weight for explaining the fire frequency of fires of all sizes (coefficients tended to zero in the count part). Moreover, areas with hazardous stability and afforestation showed less probability of fire occurrence and less fire frequency, respectively, over time for small and medium fires. On the contrary, agriculture conversion showed a positive trend for the frequency of small and medium fires. Similarly, large farms reversed their sign, increasing fire frequency with time (SM Fig. 9). ## Discussion This study documents that the studied landscape of west-central Spain was rather dynamic. Wildfires were also very dynamic, as they changed in time and space, changes which were size dependent: small fires increased and spread over the landscape, while medium and large ones were stable or decreased. Models used to explain wildfires were also size dependent, with small fires being better explained by NB and larger fires by ZINB. Time was an important factor, and including it in the models as a categorical variable improved model performance. Not all sets of variables were able to equally explain wildfires, with topography, LULC types and their changes, and agrarian characteristics producing the best models. Within these sets, several variables had similar model performance, suggesting high collinearity between them. Additionally, the role of variables in explaining fires changed over time, so that variables that were important at the beginning of the study period were not so at the end. In the following, we discuss these findings in detail. We found that the landscape in west-central Spain was rather dynamic. The main changes were characterized by the maintenance of hazardous LULC types, an increment in open forest and grasslands, as well as spatial changes in forest interfaces.
The observed LULC changes were compatible with an increase in landscape fire hazard. For example, the increment in open forests and grasslands likely contributed to enhance fuel continuity. Furthermore, the increase in forest interfaces, notably WAIs, increased the accessibility of ignitions to forest and other flammable areas. These processes were similar to those described in other Mediterranean countries7,10,51. Such changes have often been related to increases in wildfires52,53. Wildfires were also dynamic, but this was fire size dependent. Whereas small fires increased over time and became widespread over much of the whole landscape, medium fires remained stable and large fires decreased. Medium and large fires exhibited a spatial concentration, since they tended to disappear from areas in which they were present at the beginning of the study period, without spreading to new areas. The fact that the number of small fires tended to increase suggests that ignitions were facilitated by a landscape with a greater number of interfaces. Despite that, and despite the more hazardous landscape that has evolved with time, large fires tended to decrease, which supports the idea that firefighting services are rather efficient at deterring fires9. Notwithstanding, when conditions are severe, very large fires, even megafires, erupt in the region54, indicating that there is a limit to what the firefighting services can accomplish once conditions pass a certain threshold, beyond which the existing landscape can only facilitate fire spread. Our findings are compatible with wildfire trends across southern European Mediterranean-type countries. Indeed, wildfires have been decreasing in these countries3,6,55. This tendency, however, varies among and within countries. For example, while the reduction of fires and area burned in Spain, France and Italy is clear, that is not so much the case with Portugal54,55.
Moreover, within Spain, while wildfires have been decreasing in a majority of provinces, in others, including those comprising our study area, they were increasing, mostly due to the contribution of small fires5,56.

ZINB mixed models better explained the occurrence (presence/absence) of medium and large fires and the frequency of small fires. Moreover, ZINB mixed models substantially reduced the variance in the random components after incorporating the zero-inflated part44, particularly for small and large fires. ZINB models have been used previously to model wildfires33,35,36,37, although other works that used both ZINB and NB models found no evidence to favor the former over the latter34. Additionally, including time as a categorical interacting factor improved model performance for all fire sizes. Notwithstanding, average models were also appropriate for large fires. Applying zero-inflated models with random effects to longitudinal data should be further considered for studying wildfires in dynamic landscapes like the Mediterranean ones.

Of the seven sets of variables used for modelling wildfires, four produced the best model results, which is supported by the literature: topography57,58,59,60, land use-land cover20,25,61 and its changes7,18,51,62, and agrarian characteristics18,33,53,63. Other fire drivers, such as socio-economy, forest interfaces and linear features, showed lower explanatory power for all fire sizes, which confirms other works64, even if these findings are not general12,19,22,24,65. Additionally, we could not ascertain an effect or trend of climate variables, despite the changes undergone by some of them in southern Europe in recent decades66. On the other hand, there were large differences in the model performance of these sets of variables, depending on fire size and modelling approach.
While topographic variables had the best model performance for large fires (largest DIC reduction), other sets of variables produced similar results for small and medium fires (e.g., LULC types and their changes). Moreover, within a given set of variables, no single variable produced the best results (highest DIC reduction) across fire sizes, whether for NB or ZINB models. This suggests that this area is very complex from a wildfire perspective61,67. Fires are ignited by people, with varying sensibilities and needs, and with changing capacities to fight fire over time.

In addition, the role of the various variables changed over time. Initially, wildfires were concentrated in mountainous terrain, where farming was limited and the vegetation was dominated by conifers and other hazardous types, which underwent important changes (e.g., afforestation) over time. Wildfires were abundant in landscapes with smaller farms, and rare in areas with larger ones. By contrast, fires were limited where agricultural activities dominated, typical of the lower-elevation, flatter areas. With time, some variables, such as conifer forest or elevation, lost statistical weight, the latter mainly for medium and large fires. Other variables, such as shrublands, hazardous stability and large farms, reversed their sign. The net result was that wildfires moved to areas at lower elevation, with herbaceous crops and large farms. Our findings support studies that found that topographic variables tended to contribute less to explaining wildfires in recent decades16, as well as studies that found that wildfires have decreased in forests22,65 and shrublands25,68, while increasing in less flammable areas22. To conclude, the initial link of wildfires to areas with certain hazardous characteristics, in which they abounded at the beginning of the study period, was lost.
This process has also been observed in other Mediterranean areas of Greece52 and Portugal68, where recent fires have partially moved to non-fire-prone areas, indicating a departure from historical burning patterns. This again supports the argument that the complex nature of these human-dominated landscapes, in interaction with the ignitions generated by people, complicates understanding of how fires will evolve over time and in space. In this regard, deterministic modelling approaches to infer future fires due to climate change or other global change drivers will be plagued with uncertainties. Consequently, if projections of future fires are difficult to make in general69, in Mediterranean landscapes like the one studied they will be even more challenging.

## References

1. Westerling, A. L., Hidalgo, H. G., Cayan, D. R. & Swetnam, T. W. Warming and earlier spring increase western US forest wildfire activity. Science 313, 940–943 (2006).
2. Turco, M. et al. On the key role of droughts in the dynamics of summer fires in Mediterranean Europe. Sci Rep-UK 7, 81 (2017).
3. Urbieta, I. R. et al. Fire activity as a function of fire–weather seasonal severity and antecedent climate across spatial scales in southern Europe and Pacific western USA. Environ Res Lett 10, 114013 (2015).
4. Krawchuk, M. A., Moritz, M. A., Parisien, M.-A., Van Dorn, J. & Hayhoe, K. Global pyrogeography: the current and future distribution of wildfire. PloS One 4, e5102 (2009).
5. Rodrigues, M., San Miguel, J., Oliveira, S., Moreira, F. & Camia, A. An insight into spatial-temporal trends of fire ignitions and burned areas in the European Mediterranean countries. Journal of Earth Science and Engineering 3, 497 (2013).
6. Earl, N. & Simmonds, I. Spatial and temporal variability and trends in 2001–2016 global fire activity. J Geophys Res-Atmos 123, 2524–2536 (2018).
7. Moreira, F. et al. Landscape – wildfire interactions in southern Europe: Implications for landscape management. J.
Environ. Manage. 92, 2389–2402 (2011).
8. Syphard, A. D., Keeley, J. E., Pfaff, A. H. & Ferschweiler, K. Human presence diminishes the importance of climate in driving fire activity across the United States. Proc. Natl. Acad. Sci. USA, 201713885 (2017).
9. Ruffault, J. & Mouillot, F. How a new fire-suppression policy can abruptly reshape the fire-weather relationship. Ecosphere 6, 1–19 (2015).
10. Viedma, O., Moity, N. & Moreno, J. M. Changes in landscape fire-hazard during the second half of the 20th century: Agriculture abandonment and the changing role of driving factors. Agric., Ecosyst. Environ. 207, 126–140 (2015).
11. Ganteaume, A. et al. A review of the main driving factors of forest fire ignition over Europe. Environ. Manage. 51, 651–662 (2013).
12. Vilar del Hoyo, L., Isabel, M. P. M. & Vega, F. J. M. Logistic regression models for human-caused wildfire risk estimation: analysing the effect of the spatial accuracy in fire occurrence data. European Journal of Forest Research 130, 983–996 (2011).
13. Rodrigues, M., de la Riva, J. & Fotheringham, S. Modeling the spatial variation of the explanatory factors of human-caused wildfires in Spain using geographically weighted logistic regression. Appl Geogr 48, 52–63 (2014).
14. Chas-Amil, M., Prestemon, J., McClean, C. & Touza, J. Human-ignited wildfire patterns and responses to policy shifts. Appl Geogr 56, 164–176 (2015).
15. Parisien, M.-A. et al. An analysis of controls on fire activity in boreal Canada: comparing models built with different temporal resolutions. Ecol. Appl. 24, 1341–1356 (2014).
16. Oliveira, S., Oehler, F., San-Miguel-Ayanz, J., Camia, A. & Pereira, J. M. C. Modeling spatial patterns of fire occurrence in Mediterranean Europe using Multiple Regression and Random Forest. For. Ecol. Manage. 275, 117–129 (2012).
17. Ganteaume, A. & Jappiot, M. What causes large fires in Southern France. For. Ecol. Manage. 294, 76–85 (2013).
18.
Martínez-Fernández, J., Chuvieco, E. & Koutsias, N. Modelling long-term fire occurrence factors in Spain by accounting for local variations with geographically weighted regression. Nat Hazard Earth Sys 13, 311–327 (2013).
19. Rodrigues, M. & de la Riva, J. An insight into machine-learning algorithms to model human-caused wildfire occurrence. Environ. Model. Software 57, 192–201 (2014).
20. Koutsias, N., Martínez-Fernández, J. & Allgöwer, B. Do factors causing wildfires vary in space? Evidence from geographically weighted regression. Gisci Remote Sens 47, 221–240 (2010).
21. Zumbrunnen, T. et al. Human impacts on fire occurrence: a case study of hundred years of forest fires in a dry alpine valley in Switzerland. Reg Environ Change 12, 935–949 (2012).
22. Salvati, L. & Ranalli, F. ‘Land of Fires’: Urban Growth, Economic Crisis, and Forest Fires in Attica, Greece. Geogr Res 53, 68–80 (2015).
23. Rodrigues, M., Jiménez, A. & de la Riva, J. Analysis of recent spatial–temporal evolution of human driving factors of wildfires in Spain. Nat. Hazards 84, 2049–2070 (2016).
24. Vilar, L. et al. Multitemporal modelling of socio-economic wildfire drivers in Central Spain between the 1980s and the 2000s: comparing generalized linear models to machine learning algorithms. PLoS One 11, e0161344 (2016).
25. Costa, L., Thonicke, K., Poulter, B. & Badeck, F.-W. Sensitivity of Portuguese forest fires to climatic, human, and landscape variables: subnational differences between fire drivers in extreme fire years and decadal averages. Reg Environ Change 11, 543–551 (2011).
26. Pazienza, P. & Beraldo, S. Adverse effects and responsibility of environmental policy: the case of forest fires. Corporate Social Responsibility and Environmental Management 11, 222–231 (2004).
27. Michetti, M. & Pinar, M. Forest fires in Italy: An econometric analysis of major driving factors. CMCC Research Papers RP0152, 1–39 (2013).
28. Pereira, P., Turkman, K.
F., Turkman, M. A. A., Sá, A. & Pereira, J. M. Quantification of annual wildfire risk; A spatio-temporal point process approach. Statistica 73, 55 (2013).
29. Turkman, K., Turkman, M. A., Pereira, P., Sá, A. & Pereira, J. Generating annual fire risk maps using Bayesian hierarchical models. Journal of Statistical Theory and Practice 8, 509–533 (2014).
30. Michetti, M. & Pinar, M. Forest fires across Italian regions and implications for climate change: a panel data analysis. Environmental and Resource Economics, 1–40 (2018).
31. Amaral-Turkman, M., Turkman, K., Le Page, Y. & Pereira, J. Hierarchical space-time models for fire ignition and percentage of land burned by wildfires. Environ. Ecol. Stat. 18, 601–617 (2011).
32. Serra, L., Saez, M., Juan, P., Varga, D. & Mateu, J. A spatio-temporal Poisson hurdle point process to model wildfires. Stochastic Environmental Research and Risk Assessment 28, 1671–1684 (2014).
33. Boubeta, M., Lombardía, M. J., Marey-Pérez, M. F. & Morales, D. Prediction of forest fires occurrences with area-level Poisson mixed models. J. Environ. Manage. 154, 151–158 (2015).
34. Mann, M. L. et al. Incorporating anthropogenic influences into fire probability models: Effects of human activity and climate change on fire activity in California. PLoS One 11, e0153589 (2016).
35. Arienti, M. C., Cumming, S. G., Krawchuk, M. A. & Boutin, S. Road network density correlated with increased lightning fire incidence in the Canadian western boreal forest. Int. J. Wildland Fire 18, 970–982 (2010).
36. Krawchuk, M. A. & Cumming, S. G. Disturbance history affects lightning fire initiation in the mixedwood boreal forest: observations and simulations. For. Ecol. Manage. 257, 1613–1622 (2009).
37. Krawchuk, M. A., Cumming, S. G. & Flannigan, M. D. Predicted changes in fire weather suggest increases in lightning fire initiation and future area burned in the mixedwood boreal forest. Clim. Change 92, 83–97 (2009).
38.
Krawchuk, M. A. & Cumming, S. G. Effects of biotic feedback and harvest management on boreal forest fire activity under climate change. Ecol. Appl. 21, 122–136 (2011).
39. McLeod, A. I. Kendall: Kendall rank correlation and Mann-Kendall trend test. R package version 2.2 (2011).
40. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing. Vienna, Austria. http://www.R-project.org/ (2013).
41. McDonald, J. H. The Handbook of Biological Statistics. http://www.biostathandbook.com/gtestgof.html (2007).
42. Yau, K. K., Wang, K. & Lee, A. H. Zero‐inflated negative binomial mixed regression modeling of over‐dispersed count data with extra zeros. Biometrical J 45, 437–452 (2003).
43. Atkins, D. C., Baldwin, S. A., Zheng, C., Gallop, R. J. & Neighbors, C. A tutorial on count regression and zero-altered count models for longitudinal substance use data. Psychol Addict Behav 27, 166 (2013).
44. Zeileis, A., Kleiber, C. & Jackman, S. Regression models for count data in R. J Stat Softw 27, 1–25 (2008).
45. Lee, A. H., Wang, K., Scott, J. A., Yau, K. K. & McLachlan, G. J. Multi-level zero-inflated Poisson regression modelling of correlated count data with excess zeros. Stat Methods Med Res 15, 47–61 (2006).
46. Kwok, O.-M. et al. Analyzing longitudinal data with multilevel models: An example with individuals living with lower extremity intra-articular fractures. Rehabil Psychol 53, 370 (2008).
47. Gibbons, R. D., Hedeker, D. & DuToit, S. Advances in analysis of longitudinal data. Ann. Rev. Clin. Psych. 6, 79–107 (2010).
48. Vuong, Q. H. Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica 57, 307–333 (1989).
49. Silva, J. M. S., Tenreyro, S. & Windmeijer, F. Testing competing models for non-negative data with many zeros. Journal of Econometric Methods 4, 29–46 (2015).
50. Hadfield, J. D.
MCMC methods for multi-response generalized linear mixed models: the MCMCglmm R package. J Stat Softw 33, 1–22 (2010).
51. Fernandes, P. M. et al. The dynamics and drivers of fuel and fire in the Portuguese public forest. J. Environ. Manage. 146, 373–382 (2014).
52. Koutsias, N. et al. Where did the fires burn in Peloponnisos, Greece the summer of 2007? Evidence for a synergy of fuel and weather. Agr Forest Meteorol 156, 41–53 (2012).
53. Chas-Amil, M. L., Touza, J. & Garcia-Martinez, E. Forest fires in the wildland-urban interface: A spatial analysis of forest fragmentation and human impacts. Appl Geogr 43, 127–137 (2013).
54. San-Miguel-Ayanz, J., Moreno, J. M. & Camia, A. Analysis of large fires in European Mediterranean landscapes: Lessons learned and perspectives. For. Ecol. Manage. 294, 11–22 (2013).
55. Turco, M. et al. Decreasing Fires in Mediterranean Europe. PLoS One 11, e0150663 (2016).
56. Jiménez-Ruano, A., Mimbrero, M. R. & de la Riva Fernández, J. Exploring spatial–temporal dynamics of fire regime features in mainland Spain. Nat Hazard Earth Sys 17, 1697 (2017).
57. Catry, F. X., Rego, F. C., Bacao, F. & Moreira, F. Modeling and mapping wildfire ignition risk in Portugal. Int. J. Wildland Fire 18, 921–931 (2009).
58. Romero-Calcerrada, R., Barrio-Parra, F., Millington, J. D. A. & Novillo, C. J. Spatial modelling of socioeconomic data to understand patterns of human-caused wildfire ignition risk in the SW of Madrid (central Spain). Ecol. Model. 221, 34–45 (2010).
59. Viedma, O., Angeler, D. G. & Moreno, J. M. Landscape structural features control fire size in a Mediterranean forested area of central Spain. Int. J. Wildland Fire 18, 575–583 (2009).
60. Fernandes, P. M., Monteiro-Henriques, T., Guiomar, N., Loureiro, C. & Barros, A. M. G. Bottom-Up Variables Govern Large-Fire Size in Portugal. Ecosystems 19, 1362–1375 (2016).
61. Moreno, J. M., Viedma, O., Zavala, G. & Luna, B.
Landscape variables influencing forest fires in central Spain. Int. J. Wildland Fire 20, 678–689 (2011).
62. Fernandes, P. M., Loureiro, C., Magalhães, M., Ferreira, P. & Fernandes, M. Fuel age, weather and burn probability in Portugal. Int. J. Wildland Fire 21, 380–384 (2012).
63. Ortega, M., Saura, S., González-Avila, S., Gómez-Sanz, V. & Elena-Rosselló, R. Landscape vulnerability to wildfires at the forest-agriculture interface: half-century patterns in Spain assessed through the SISPARES monitoring framework. Agroforest Syst 85, 331–349 (2012).
64. Oliveira, S., Pereira, J. M. C., San-Miguel-Ayanz, J. & Lourenço, L. Exploring the spatial patterns of fire density in Southern Europe using Geographically Weighted Regression. Appl Geogr 51, 143–157 (2014).
65. Gómez Nieto, I. G., Martín, P. & Salas, F. J. Análisis del régimen de incendios forestales y su relación con los cambios de uso del suelo en la Comunidad Autónoma de Madrid (1989–2010). Geofocus: Revista Internacional de Ciencia y Tecnología de la Información Geográfica 12 (2015).
66. Venäläinen, A. et al. Temporal variations and change in forest fire danger in Europe for 1960–2012. Nat. Hazards Earth Syst. Sci. 14, 1477–1490 (2014).
67. Viedma, O., Moreno, J. M. & Rieiro, I. Interactions between land use/land cover change, forest fires and landscape structure in Sierra de Gredos (Central Spain). Environ. Conserv. 33, 212–222 (2006).
68. Marques, S. et al. Characterization of wildfires in Portugal. European Journal of Forest Research 130, 775–784 (2011).
69. Harris, R. M., Remenyi, T. A., Williamson, G. J., Bindoff, N. L. & Bowman, D. M. Climate–vegetation–fire interactions and feedbacks: trivial detail or major barrier to projecting the future of the Earth system? Wires Clim Change 7, 910–931 (2016).
## Acknowledgements

The research leading to these results has received funding from the European Union’s Seventh Framework Programme (FP7/2007-2013), Project FUME, grant agreement no. 243888, and from the FOCCLIM project (CGL2016-78357-R) funded by the Spanish Ministerio de Economía y Competitividad. The authors would like to thank Olivier Nuñez, Santiago Allende and Adrian Barnett for their fruitful comments.

## Author information

### Contributions

O.V. performed research, analyzed data and wrote the paper. I.R.U. performed research and wrote the paper. J.M.M. conceived and designed the study and wrote the paper.

### Corresponding author

Correspondence to O. Viedma.

## Ethics declarations

### Competing Interests

The authors declare no competing interests.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Viedma, O., Urbieta, I.R. & Moreno, J.M. Wildfires and the role of their drivers are changing over time in a large rural area of west-central Spain. Sci Rep 8, 17797 (2018). https://doi.org/10.1038/s41598-018-36134-4

### Keywords

• West-central Spain
• Zero-inflated Negative Binomial (ZINB)
• LULC Types
• Land Use/land Cover (LULC)
• Fire Drivers
## Semibounded Unitary Representations of Double Extensions of Hilbert--Loop Groups [PDF]

Karl-Hermann Neeb

A unitary representation of a, possibly infinite dimensional, Lie group $G$ is called semibounded if the corresponding operators $i\,\mathrm{d}\pi(x)$ from the derived representation are uniformly bounded from above on some non-empty open subset of the Lie algebra $\mathfrak{g}$ of $G$. We classify all irreducible semibounded representations of the groups $\hat{\mathcal{L}}_\phi(K)$ which are double extensions of the twisted loop group $\mathcal{L}_\phi(K)$, where $K$ is a simple Hilbert--Lie group (in the sense that the scalar product on its Lie algebra is invariant) and $\phi$ is a finite order automorphism of $K$ which leads to one of the 7 irreducible locally affine root systems with their canonical $\mathbb{Z}$-grading. To achieve this goal, we extend the method of holomorphic induction to certain classes of Fréchet--Lie groups and prove an infinitesimal characterization of analytic operator-valued positive definite functions on Fréchet--BCH--Lie groups.

View original: http://arxiv.org/abs/1205.5201
On Tuesday 16 April 2019, at 3:00 pm sharp, in the conference room of IMATI-CNR in Pavia, Dr. Alessandra Pluda (Università di Pisa) will give a seminar entitled:

Long Time Existence of Solutions to an Elastic Flow of Networks

as part of the Applied Mathematics Seminar (IMATI-CNR and Department of Mathematics, Pavia).

______________________

Abstract. In this talk we will consider the $L^2$-gradient flow of the elastic energy of networks in $\mathbb{R}^2$, which leads to a fourth order evolution law with non-trivial nonlinear boundary conditions. Hereby we study configurations consisting of a finite union of curves that meet in triple junctions (and may or may not have endpoints fixed in the plane). We investigate the long time behaviour of solutions to this flow. Starting from a suitable initial network of class $W_p^{4-4/p}$ with $p\in (5,10)$ we prove that the flow exists globally in time, or at least one of the following happens: as the time approaches the maximal time of existence, the length of at least one curve tends to zero, or at one of the triple junctions of the network all the angles between the concurring curves tend to zero or to $\pi$. This is joint work with Harald Garcke and Julia Menzel.
# C++ Makefile, Cross-Compiling

I have written one previous post on cross-compiling, which used the scons build system. However, scons is not universally installed, and has some issues with larger projects. Here, I will use the same strategy for cross-compiling using mingw-w64, but with the generic makefile that has been developed over the last several posts.

We will incorporate cross-compiling as a separate build target. That way, to make the standard binary we will use `make`, and to cross-compile we will use `make BUILD=win32` and `make BUILD=win64` for 32-bit and 64-bit binaries. The main meat of this has already been accomplished when we made the separate build targets, and so we will just need to add a bit more flexibility.

Mingw-w64 contains versions of gcc that target Windows systems, along with all the dynamic libraries needed to run on Windows. So long as we switch compilers to the appropriate mingw-w64-provided executable, we will produce Windows binaries instead of native binaries.

```make
# In win32.inc
CXX = i686-w64-mingw32-g++
AR  = i686-w64-mingw32-ar
```

We want to follow standard conventions for filenames on Windows. Executables should end in `.exe`, and shared libraries should be named `MyLibrary.dll` instead of `libMyLibrary.so`. In addition, because Windows does not have the equivalent of RPATH, shared libraries are usually either installed system-wide or placed into the same folder as the executable. Since it is unreasonable to assume that everybody wants to install a program system-wide, we will be placing the shared libraries in the same folder as the executable. We define a few variables to describe the output file names.

```make
# In win32.inc
EXE_NAME            = bin/$(1).exe
SHARED_LIBRARY_NAME = $(patsubst lib%,bin/%.dll,$(1))
STATIC_LIBRARY_NAME = $(patsubst lib%,lib/%.dll.a,$(1))
```

Now, we modify our targets and our build rules to make calls to these functions. Here are the modifications for the shared libraries.
The rules for the executables and the static libraries are done in a similar manner.

```make
# In Makefile
SHARED_LIBRARY_OUTPUT = $(foreach lib,$(LIBRARY_FOLDERS),$(call SHARED_LIBRARY_NAME,$(lib)))

$(call SHARED_LIBRARY_NAME,lib%): build/$(BUILD)/$(call SHARED_LIBRARY_NAME,lib%) .build-target
	mkdir -p $(@D)
	cp -f $< $@

build/$(BUILD)/$(call SHARED_LIBRARY_NAME,lib%): $$(call library_os_files,%)
	mkdir -p $(@D)
	$(CXX) $(ALL_LDFLAGS) $^ -shared $(SHARED_LDLIBS) -o $@
```

Finally, we get rid of a minor annoyance. Whenever gcc (or mingw-w64, which is based on gcc) is producing an object file for Windows, it issues a warning if the `-fPIC` flag is present. This is because all code on this target is position-independent, and therefore `-fPIC` does nothing. Rather than just ignoring the flag, a warning is produced. Therefore, we move the flag into a variable, so that our build target can disable its use.

```make
# In win32.inc
PIC_FLAG =

# In Makefile
build/$(BUILD)/build/%.os: %.cc
	mkdir -p $(@D)
	$(CXX) -c $(PIC_FLAG) $(ALL_CPPFLAGS) $(ALL_CXXFLAGS) $< -o $@
```

The full version of this makefile can be found on github.
When we have a square matrix of size $n$, $A$, and we multiply it by a vector $\vect{x}$ from $\complex{n}$ to form the matrix-vector product (Definition MVP), the result is another vector in $\complex{n}$. So we can adopt a functional view of this computation — the act of multiplying by a square matrix is a function that converts one vector ($\vect{x}$) into another one ($A\vect{x}$) of the same size. For some vectors, this seemingly complicated computation is really no more complicated than scalar multiplication. The vectors vary according to the choice of $A$, so the question is to determine, for an individual choice of $A$, if there are any such vectors, and if so, which ones. It happens in a variety of situations that these vectors (and the scalars that go along with them) are of special interest. We will be solving polynomial equations in this chapter, which raises the specter of complex numbers as roots. This distinct possibility is our main reason for entertaining the complex numbers throughout the course. You might be moved to revisit Section CNO and Section O. Section EE Eigenvalues and Eigenvectors Section PEE Properties of Eigenvalues and Eigenvectors Section SD Similarity and Diagonalization
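The "no more complicated than scalar multiplication" phenomenon can be seen in a tiny concrete case. The matrix and vector below are a hypothetical illustration, not taken from the text:

```python
# For A = [[2, 1], [1, 2]] and x = (1, 1), multiplying by A collapses to
# scalar multiplication: A x = (3, 3) = 3 x, so x is an eigenvector of A
# with eigenvalue 3.
A = [[2, 1],
     [1, 2]]
x = [1, 1]

# Plain matrix-vector product, computed entry by entry.
Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
print(Ax)  # [3, 3], i.e. 3 * x
```

By contrast, a vector such as (1, 0) is sent to (2, 1), which is not a scalar multiple of it; only special vectors enjoy this property, and finding them is the subject of the chapter.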
# LightSource

## Attributes

LightSource(win[, pos, diffuseColor, …]) — Class for representing a light source in a scene.

## Details

class psychopy.visual.LightSource(win, pos=(0.0, 0.0, 0.0), diffuseColor=(1.0, 1.0, 1.0), specularColor=(1.0, 1.0, 1.0), ambientColor=(0.0, 0.0, 0.0), colorSpace='rgb', lightType='point', attenuation=(1, 0, 0))

Class for representing a light source in a scene. Only point and directional lighting are supported by this object for now. The ambient color of the light source contributes to the scene ambient color defined by ambientLight.

Warning: This class is experimental and may result in undefined behavior.

Parameters

• win (~psychopy.visual.Window) – Window associated with this light source.
• pos (array_like) – Position of the light source (x, y, z, w). If w=1.0 the light will be a point source and x, y, and z give its position in the scene. If w=0.0, the light source will be directional and x, y, and z define the vector pointing in the direction the light source is coming from. For instance, a vector of (0, 1, 0, 0) indicates a light source coming from above.
• diffuseColor (array_like) – Diffuse light color.
• specularColor (array_like) – Specular light color.
• ambientColor (array_like) – Ambient light color.
• colorSpace (str) – Colorspace for the diffuse, specular, and ambient colors.
• attenuation (array_like) – Values for the constant, linear, and quadratic terms of the lighting attenuation formula. Default is (1, 0, 0), which results in no attenuation.

property ambientColor — Ambient color of the light source.
property ambientRGB — Ambient color of the light source.
property attenuation — Values for the constant, linear, and quadratic terms of the lighting attenuation formula.
property diffuseColor — Diffuse color of the light source.
property diffuseRGB — Diffuse color of the light source.
property lightType — Type of light source; can be ‘point’ or ‘directional’.
property pos — Position of the light source in the scene in scene units.
property specularColor — Specular color of the light source.
property specularRGB — Specular color of the light source.
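The three `attenuation` coefficients correspond to the standard OpenGL-style distance-attenuation formula. The helper function below is a hypothetical sketch of how those coefficients act, not part of the PsychoPy API:

```python
# Hypothetical helper: intensity falloff factor 1 / (kc + kl*d + kq*d**2),
# where (kc, kl, kq) are the constant, linear, and quadratic coefficients
# supplied via `attenuation`. The default (1, 0, 0) yields a factor of 1.0
# at any distance, i.e. no attenuation.
def attenuation_factor(dist, coeffs=(1.0, 0.0, 0.0)):
    kc, kl, kq = coeffs
    return 1.0 / (kc + kl * dist + kq * dist ** 2)

print(attenuation_factor(10.0))                   # 1.0 (default: no falloff)
print(attenuation_factor(10.0, (1.0, 0.1, 0.0)))  # 0.5 (linear falloff)
```

A nonzero quadratic term makes intensity drop off much faster with distance, which is often the most physically plausible choice for point sources.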
# How to formulate this problem?

I have a matrix $S \in \mathbb{R}^{M\times N}$ with only binary values $0$/$1$. I want to select $m$ rows from $S$ and sum the $m$ rows to get a new vector $v$. I wish $v$ to be close to the uniform distribution. Currently, I am thinking of normalizing $v$ and maximizing the entropy of $v$. For example, the matrix $S$ is:

[[0,1,1,1,0],
 [1,0,0,0,1],
 [1,1,1,1,1],
 [0,1,1,0,0]]

I want to select 2 rows from $S$. Say I selected the first and second rows; the sum of the two rows is [1,1,1,1,1]. I normalize $v$ into [0.2,0.2,0.2,0.2,0.2], which is closest to the uniform distribution compared with other combinations. But I don't know how to formulate this question in the standard mixed-integer linear program form. Thank you if anyone can give some insights.

Let $x_1,\dots,x_M$ be binary variables, and let $y_1,\dots,y_N$, $z$ and $w$ be general (nonnegative) variables. (They will all turn out to be integers, but you do not need to declare them as integers.) Consider the following constraint set:
\begin{align*}
\sum_{i=1}^{M}x_{i} & =m\\
x^{\prime}S & =y\\
y_{i} & \ge z\quad\forall i\\
y_{i} & \le w\quad\forall i.
\end{align*}
$y$ is your row sum, and $w$ and $z$ will be the largest and smallest elements of that sum, respectively. One way to achieve what I think you mean by uniformity is to minimize $w-z$, packing the row sum elements into the smallest range possible.

• Thank you! This might be one heuristic. But say we have two vectors, [1,0,0,1,0] and [1,1,1,1,0]. These two have the same $w-z$ value but the first is closer to the uniform distribution. From a statistical view, we usually use entropy. For a normalized vector such as [0.2,0.2,0.2,0.2,0.2], the entropy is $-\sum_{i=1}^{5} 0.2\log(0.2)$. The larger this value, the closer the distribution is to uniform. Is there any way to set this objective function in a mixed-integer programming solver?
Aug 25 '20 at 16:18

• This looks similar to Shannon entropy, except that (as I understand it) that deals with probabilities. Assuming this is supposed to follow the Shannon formula, what do you do with $p\log_2(p)$ when $p=0$? Aug 25 '20 at 20:08

Entropy maximization will not result in a MILP. The continuous relaxation is a convex (asymmetric cone) conic optimization problem. Entropy can be maximized, subject to linear and integer constraints (unlike @prubin's answer, I think you need to declare the variables in this problem as binary (integer)), using a convex optimization tool such as CVX, YALMIP, CVXPY, or CVXR, calling Mosek's native mixed-integer exponential cone capability to solve the problem. You can make use of CVX's entr function, and of similar functions in the other convex optimization tools.

If you do not have access to Mosek, but do have access to Gurobi, you can install CVXQUAD under CVX, including its exponential.m replacement, and call Gurobi to solve an MISOCP produced from CVXQUAD, which generates 2 by 2 LMIs that CVX converts to SOCP constraints before passing them on to the solver. See http://ask.cvxr.com/t/cvxquad-how-to-use-cvxquads-pade-approximant-instead-of-cvxs-unreliable-successive-approximation-for-gp-mode-log-exp-entr-rel-entr-kl-div-log-det-det-rootn-exponential-cone-cvxquads-quantum-matrix-entropy-matrix-log-related-functions/5598 .
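For an instance this small, both objectives discussed above can be checked by brute-force enumeration, with no solver at all. A minimal Python sketch comparing the min-range objective from the MILP answer with the entropy objective from the comments:

```python
import math
from itertools import combinations

S = [[0, 1, 1, 1, 0],
     [1, 0, 0, 0, 1],
     [1, 1, 1, 1, 1],
     [0, 1, 1, 0, 0]]
m = 2

def row_sum(rows):
    return [sum(S[i][j] for i in rows) for j in range(len(S[0]))]

def spread(rows):
    # w - z: the range objective from the MILP formulation.
    v = row_sum(rows)
    return max(v) - min(v)

def entropy(rows):
    # Shannon entropy of the normalized row sum; zero entries contribute
    # nothing, using the convention p*log(p) -> 0 as p -> 0.
    v = row_sum(rows)
    t = sum(v)
    return -sum(p / t * math.log(p / t) for p in v if p > 0)

picks = list(combinations(range(len(S)), m))
print(min(picks, key=spread))   # (0, 1): sum [1, 1, 1, 1, 1], range 0
print(max(picks, key=entropy))  # (0, 1): uniform, entropy log(5)
```

Here both objectives select rows 0 and 1, but on other inputs (e.g. comparing [1,0,0,1,0] with [1,1,1,1,0], as in the comments) they can rank selections differently, which is why the entropy objective requires the conic machinery described above rather than a plain MILP.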
# Suspension Calculator A good suspension system is key in any type of automobile, but especially in off-road applications. A suspension should be designed to use about 1/4 to 1/3 of its maximum travel under normal load. ReadyLIFT is for consumers who want a lift kit that won't compromise fuel economy. RideTech has developed bolt-on coilover suspension systems that already include correct-rate springs. Our 4-Link Coil-Over Rear Suspension Systems for Ford’s Mustang and Fairlane cars are designed to provide exceptional traction, stability, handling, control, safety and performance, making your Ford muscle car a contender. SusProg3D is a complete 3D software package for the design, setup, evaluation and visualization of race and road car suspension systems, now with support for 30 different suspension types. For a double A-arm (unequal-length, non-parallel wishbone) layout, the chassis outline (blue lines) can be dragged up and down and rolled from side to side with the mouse (or moved with the scroll bars). For a given pair of road profiles, the resultant roll and bounce of the chassis can be studied and the suspension parameters tuned for optimal performance. Originally designed specifically to account for the greater bottom bracket drop of 29ers, the CVA pivot and linkage layout is ideal for the range of large. The geometry in their formula eliminates any marginal grey area by averaging results against one another from both directions.
This calculator will determine the weight and balance of a trailer or similar application. You enter heights above the ground and distance ahead of the front axle for up to 6 holes for all 4 brackets for your car, or any number of cars. For over 5 years Auburn Baja has run a double wishbone (unequal-length, non-parallel) rear suspension with an integrated rear steer point. Each insert is implemented by constraining two square profiles with four rubber inserts. I've been lucky enough to be involved in motorsport since 1984. Today, we’re a manufacturer known for leading the industry with new, innovative and winning engineered parts for racing and performance suspension, exhaust, brakes and cooling. Our auto repair estimate tool shows you parts and labor quotes from service shops near you.
Use this calculator to see what the chassis, rake, forks and triple clamps have to do with your magical "trail" figure. Suspension frequency is defined as the undamped natural frequency of the body in ride. Ride heights can be custom tailored to your taste during installation. Depending on the vehicle, these kits include springs, shocks, control arms, radius rods, and other attachment hardware to provide maximum lift, up to a whopping 10". Devinci and Salsa both use the Split Pivot design. Suspension calculator solving for torsional or rigidity modulus given mean coil diameter, wire diameter, spring rate constant and number of active coils. It's car math made simple. Performance Suspension Guide: find the base setup for your bike and optimize it for you. Anti-squat in the 4-link calculator, when used to model the front suspension, could/should be labeled pro-dive. To calculate suspension frequency for an individual corner, you need mass and spring rate.
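The per-corner frequency mentioned above follows from the undamped natural-frequency relation $f = \frac{1}{2\pi}\sqrt{k/m}$; a minimal sketch (function name and example values are mine, assuming the wheel rate in N/m and the corner sprung mass in kg):

```python
from math import pi, sqrt

def ride_frequency_hz(wheel_rate_n_per_m, sprung_mass_kg):
    """Undamped corner frequency: f = (1 / (2*pi)) * sqrt(k / m)."""
    return sqrt(wheel_rate_n_per_m / sprung_mass_kg) / (2 * pi)

# e.g. a 30 N/mm effective wheel rate carrying a 300 kg corner mass:
print(round(ride_frequency_hz(30_000, 300), 2))  # 1.59 Hz
```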
CB Performance Racing Products has VW performance parts, electronic fuel injection systems, turbos, CNC-ported cylinder heads, dune buggy parts, dropped spindles, Weber, Dellorto, crankshafts, connecting rods, complete turnkey engines and disc brake kits for air-cooled Volkswagens. Today's bikes come with pretty good suspension, and the average rider would only need to dial in the compression and rebound, sprung correctly for the rider's weight, with proper sag settings. For whatever your motorsport-related math problem, racingaspirations.com aims to provide a calculator that will help you find a solution. Sway-A-Way has greatly expanded the Tech Room with new tools and resources to help you find the right suspension components and accessories for your application. Download the 4-link suspension calculator for free. This will positively affect the characteristics of the motorcycle to increase wheel travel, comfort, and progressiveness. Wheel travel and articulation matter whether taking a vehicle through a slow crawl over rocks or through the whoops, which was our case with Project Storm Trooper. Welcome, from sunny Australia! The Suspension Page. You can actually tweak your bike's suspension by adjusting where the front and back wheel sit on the bike.
Recent price changes have made Öhlins an even more attractive option when looking to make suspension upgrades. This allows us to specify the wheel location (both laterally and longitudinally), which will then calculate the track, wheelbase and wishbone link lengths. For this reason, off-road racing teams spend considerable amounts of time on damper development, because dampers are a huge part of the performance package. Units used are inches and pounds, or millimetres and newtons, respectively. It'll take us three non-consecutive articles to get there, but it's a worthy system to model.
This calculator will determine your spring rate. This calculator will display the results in two forms. Formulas are provided for each calculator. Pictured is a Big Dog K9, which has 39 degrees of rake, 3-degree raked trees, and 12"-over forks. I just got a new axle, set up with a 3-link. It can still help when towing a trailer, but not as much as a WD system. Our customers are very familiar with our Remote Reservoir, which is known for its excellent performance. "After 5000 miles the difference is truly amazing." The streetbox! will be the solution for you and your bike. OWC oscillating suspensions assure a high shock-absorbing level due to their special shape featuring the interaction of four elastic torsional elements.
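A common way to compute the "trail" figure from rake and offset (one convention among several; function name and example numbers are mine, with rake measured from vertical and offset as the perpendicular distance from steering axis to axle):

```python
from math import sin, cos, radians

def trail(wheel_radius, rake_deg, triple_clamp_offset):
    """Ground trail of a motorcycle front end:
    (R*sin(rake) - offset) / cos(rake), all lengths in the same units."""
    r = radians(rake_deg)
    return (wheel_radius * sin(r) - triple_clamp_offset) / cos(r)

# e.g. a 12.5" radius front wheel, 30 degrees of rake, 2" of offset:
print(round(trail(12.5, 30.0, 2.0), 2))  # about 4.91" of trail
```

More rake or less offset increases trail, which is why raked trees change the handling of a chopper like the Big Dog above.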
The project is open source and in the language of Microsoft Visual C#, making it free for anyone to program their own updates and for people to share. We are designing the front suspension, and will input data for the LH side. The steps below will help you to find an approximate spring rate for your coil-over application. The Roll Center Calculator lets you input, view, save, analyze and compare either double A-frame or McPherson strut front suspension geometry. Features include: Forza 3 and Forza 4 tunes; Imperial and Metric support; suspension, differential and gearing results; tunes for Grip/Normal/Drift; save tunes to phone storage for editing anytime. Just change the view (Y on the XB1 controller) to find it. The dyno data below, at suspension velocities of 550 in/sec, is very rare and important in quantifying the effect of shim stack structure distortions on suspension performance.
Find what others are running for wheels, tires, suspension, lighting, and more. It also tells you how many clicks of rebound and compression the company recommends for your bike. Air Lift Performance is the performance division of Air Lift Company and produces full air ride and air management systems for lowered and performance vehicles. Suspension Calculator is an assorted collection of vehicle dynamics mathematical models that enable calculation of static and dynamic properties of a vehicle, thereby furnishing data required for the development of a vehicle's suspension system. There are currently 5 calculators. The zoom extents button centers the model and zooms it to the point you can see the entire model within the calculator. With the first two columns now full, use the equation below to calculate the installation ratio to fill the final column.
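A minimal sketch of the usual installation-ratio relation (my formulation, not the page's; the ratio is taken as spring travel per unit of wheel travel, and it enters the wheel rate squared because it scales both force and displacement):

```python
def installation_ratio(spring_travel, wheel_travel):
    """Motion ratio measured as spring travel per unit of wheel travel."""
    return spring_travel / wheel_travel

def wheel_rate(spring_rate, ratio):
    """Effective rate at the wheel: spring rate times the ratio squared."""
    return spring_rate * ratio ** 2

# e.g. a 500 lb/in spring that compresses 0.7" per 1" of wheel travel:
print(round(wheel_rate(500, installation_ratio(0.7, 1.0)), 1))  # 245.0 lb/in
```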
Whether you’re driving a Jeep Wrangler YJ, a Chevy S10 or C10 pickup, or a Ford Bronco, Summit Racing carries several 4-link kits for trucks and SUVs. To use, measure the width, length and thickness of each leaf in the spring pack, and enter the measurements into the converter. When the suspension system is designed, a 1/4 model (one of the four wheels) is used to simplify the problem to a 1-D multiple spring-damper system. Niner's Constantly Varying Arc, or CVA™, suspension design is a short dual-link, four-bar system unique to Niner, developed in house and patented. Once the suspension service is completed, all C4s receive the same suspension alignment set-up specifications. This is fine, as some suspension systems alter due to the arc pattern that the suspension takes when being moved up and down in bump and droop. This can help determine if you have fender issues with your tire. It's very hard to list all the calculations involved in the suspension, as they depend upon your imagination and vehicle understanding.
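The 1/4-car model mentioned above can be sketched as a two-degree-of-freedom spring-damper system. This is an illustrative sketch only: the parameter values are assumptions, and explicit Euler integration is used purely for brevity:

```python
def quarter_car_step(state, dt, ms=300.0, mu=40.0,
                     ks=30_000.0, kt=200_000.0, cs=2_000.0, road=0.0):
    """One explicit-Euler step of a 2-DOF quarter-car model.

    state = (zs, vs, zu, vu): sprung and unsprung positions/velocities
    (metres, m/s). ms/mu are sprung/unsprung masses, ks/kt the suspension
    and tyre spring rates, cs the damper coefficient.
    """
    zs, vs, zu, vu = state
    fs = ks * (zu - zs) + cs * (vu - vs)  # spring/damper force on the body
    ft = kt * (road - zu)                 # tyre spring force on the wheel
    return (zs + vs * dt, vs + fs / ms * dt,
            zu + vu * dt, vu + (ft - fs) / mu * dt)

# Roll over a 10 mm step in the road and let the transient die out:
s = (0.0, 0.0, 0.0, 0.0)
for _ in range(40_000):               # 2 s of simulation at dt = 50 us
    s = quarter_car_step(s, 5e-5, road=0.010)
print(round(s[0] * 1000, 1))          # body settles near the 10 mm step height
```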
When upfitting or modifying a vehicle, these calculations produce information to share with customers, and assist in understanding how certain industry standards may apply. Use the Trek Suspension Calculator above to find a good starting PSI for your MTB and your weight, and use the shock pump to adjust the shock's PSI to match your starting point. Input your fully equipped weight, select your model of Trek mountain bike, and we'll calculate your suggested suspension settings.
Suspension Parts - Arnott AS-2605, rating: 3 stars. A solid 3-star experience for me; I have mixed feelings about this purchase. The 2019 Ram 1500 4×4 Off-Road Package now stands alone and includes a one-inch suspension lift, with or without the available air suspension. K-Tech Suspension is a premium ISO 9001 accredited brand providing solutions, whether at the highest level of motorsport or for your daily cruiser. The gear ratio calculator will provide better use of the power band. Suspension redesign was a massive part of the evolution of the 2010 car. One company estimates you'll need to replace each air suspension bag between 50,000 and 70,000 miles, while another estimates replacement every 10 years. There's a lot to think about with a 3-link. Suspension calculator solving for spring rate constant given torsional or rigidity modulus, wire diameter, number of active coils and mean coil diameter.
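Both directions of that coil-spring calculation come from the round-wire helical spring relation $k = \dfrac{G\,d^4}{8\,D^3\,n}$. A sketch (function name and the example values are mine; G for steel is roughly 11.5e6 psi):

```python
def coil_spring_rate(shear_modulus, wire_d, mean_coil_d, active_coils):
    """Round-wire helical spring rate: k = G * d**4 / (8 * D**3 * n).
    Use consistent units, e.g. psi and inches -> lb/in."""
    return shear_modulus * wire_d ** 4 / (8 * mean_coil_d ** 3 * active_coils)

# Steel spring, 0.5" wire, 3.0" mean coil diameter, 8 active coils:
print(round(coil_spring_rate(11.5e6, 0.5, 3.0, 8), 1))  # 415.9 lb/in
```

Solving the same relation for G instead of k gives the "torsional or rigidity modulus" direction described above.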
Our suspension calculator makes suspension set-up easier than it's ever been. The application offers accurate and efficient analysis tools for scrutinizing vehicle characteristics of race cars and passenger vehicles. The calculator is designed for inserts with 17 or 25 possible pin positions - a center position with 0.0 adjustments in each direction. Lengthening the distance between the front and rear wheels gives a smoother ride on a straight line; shorten the distance and you'll get better cornering. For our calculator, 0 degrees is straight up and down. This calculator will recommend a mountain bike size based on your measurements. Suspension price calculator (days 1-10): row 1: £57, £57, £114, £171, £228, £285, £342, £399, £456, £513, £570; row 2: £114, £228, £342, £456, £570, £684, £798, £… (truncated).
I can use the 3-link calculator for the front, correct? The angle is your steering axis inclination (SAI), the angle between your lower ball joint and upper strut mount. Selecting a four-link or ladder bar suspension - benefits of a four-link: the improved suspension adjustability is the greatest benefit and can shorten quarter-mile times, or more specifically, 60 ft. times. Figure 2: Top-level diagram of the suspension model. The calculations vary from motion ratios, to spring stiffness, to wheel loads.
Progressive suspension is the best way to lower your motorcycle. I'm heavy and riding a 2-stroke. 4-link suspensions also provide better ride quality as well as better translation of power to the ground. Suspension baseline settings: there are also various other settings, depending on the manufacturer.
Suspension products for the world's most exciting automobiles. For any deviation plus or minus <7%, use the calculated spring rate. Öhlins Motorcycle Suspension Products & Services in Bristol: exceptional quality motorcycle suspension products, rear shock absorbers, fork springs, steering dampers. With upright bicycles and usual short-wheelbase recumbents, the weight distribution between front and rear wheel is approximately 50/50; with long-wheelbase recumbents, 30/70 front/rear; with lowracers, 60/40 front/rear. For snowmobiles, there is no specific measurement, but a number to shoot for is to have the front free sag be about 20% of the total amount of suspension travel in the front. Unsprung weight is the vehicle weight that is not supported by the springs. Cristobal Lowery: For my Masters project in Imperial College London's department of Mechanical Engineering, I have undertaken the task of programming an open-source Suspension Calculator. Shockcraft is a New Zealand engineering business specialising in mountain bike suspension, parts & service. The R&D has already been done for you.
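The 20% free-sag target above is simple arithmetic; a sketch (measurement names and example values are mine):

```python
def sag_percent(extended, compressed, total_travel):
    """Sag as a percentage of total suspension travel.
    All three measurements in the same units."""
    return 100.0 * (extended - compressed) / total_travel

# A front end with 10" of travel that settles 2" under the sled's own weight:
print(round(sag_percent(24.0, 22.0, 10.0), 1))  # 20.0 percent - on target
```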
Repetitive rear suspension failures and ultimately a drivetrain failure crippled the 2009 car. VSusp (vehicle suspension) is a front-view suspension geometry calculator/simulator. If you are dead set on suspension enhancement, the Timbren #TTORTUN4 have been confirmed as a fit for your vehicle and would work well. The ultimate guide to mountain bike rear suspension systems: explaining how the most popular designs work, plus their potential advantages or shortcomings. The longer the front travel, the stronger the emphasis is toward descending. This calculator is to help with the subtle science; the exact art you must provide. What the hell is the advanced calculator? The advanced calculator accounts for stiction of the suspension.
Forza Tuning Calculator (BG55-FTC), created by Basement & Garage (justLou72), with additional assistance and testing by the Up 2 Speed Customs (U2SC) community. The triangulated 4-link rear suspension. The Benefits of a Suspension Calculator (posted by Brad on 10/14/2015 to Install Tips): fabricating a leaf spring or link suspension requires precise measurements, all of which determine the basic functionality and off-road performance of your rig. Redshift Sports innovative cycling components allow riders of all abilities to get the most out of the bikes they already own. From fork and shock springs, shocks, fluids and suspension tools, we have everything to keep you riding on the trail, track and street. Jerry Bickel Race Cars gives racers the technology to win!
The work RVU calculator provides quick analysis of work relative value units associated with CPT® and HCPCS Level II codes. dip the loop into the bacteria suspension ; then vigorously mix a tube of melted agar with the loop; pour the agar into a sterile petri plate ; incubate the plate and count the colonies 20 to 30 hours later. Shop Fuel EX Frameset [Trek 263692] - Product detailsThe capable and stiff 130 mm frame is 29˝ and 27. 1) is quite easy. 7 then using the formula below: Calculate the Capacity of an Agitator & Conditioner. Designing an automotive suspension system is an interesting and challenging control problem. Long-travel suspension (greater than 120mm) is best for descending rough terrain at high speeds with greater control. Use at your own discretion. Use the landing section to quickly solve for any step. Epidemic Calculator. 5+ compatibleBoost148/110: stronger wheels, more tyre clearance, shorter staysExclusive suspension tech with ABP, Full Floater and RE:aktivStraight Shot stiffness, Knock Block frame defenceFeaturesSuspension CalculatorThis suspension calculator will help you optimise your suspension settings. To review the most recent Benchmarking Analysis by the Department of Highway Safety and Motor Vehicles, in which Florida’s operating costs and related fees were compared to several […]. Depending upon the purpose of experiment and research design , you have to collect blood in CPD or EDTA --> wash 3 times at 1500 rpm for 5 minutes with PBS and then, for example 100 ml 2% RBC. TNK Fork Tubes. Answer a few questions about your energy source and garage and the garage heater calculator free tool recommends models that will do the job. I haven't done much fact checking on it, but it seems pretty comprehensive for front suspension SLA designs. After Config, the next menu item should be [Front]. Without proper contact, you lose the ability to turn or. 
This puts the suspension into an active state, letting it react in both directions, keeping your tire glued to the dirt. The calculator assumes a linear suspension progression. SUSPENSION Pharmaceutical suspension may be defined as coarse dispersions in which insoluble solids are suspended in a liquid medium. It is now the fifth longest in the world with a centre span of 1,410 metres (4,626 ft) and a total length of 2,220 metres (7,283 ft). Fees are calculated based on today's date. Spring Rate Calculator for Independent Suspension. In many cases, we have been able to negotiate favourable severance terms with employers against a background of suspension, leading to a mutually agreed departure under a settlement agreement. COM Forza Tuning Calculator *Note: In FM7, torque is no longer listed in the Upgrades interface, but it can be found in the "My Garage" and "Select Car" interfaces. The first step in either building a 4-link suspension or troubleshooting an existing suspension is to download one of these Triaged calculators created by Dan Barcroft and plug in the dimensions and weights it asks for. STABILITY OF SUSPENSION It is important to understand that suspensions are kinetically stable, but thermodynamically unstable, system. K-Tech Suspension HPSF FF Suspension Fluid - #110-017-001 HPFS High Performance Front Fork Oil SAE 5w , 1 liter \$25. Triangulated 4 Link Rear Suspension System. Most vehicles in Asia are. Delivered amount of dry oligo in tube. The 4 link calculator is a tool to mainly give you something to measure your suspensions performace and give you an Ideah of what to change if you feel your suspension isn't reacting the way you would like. Use this calculator to determine your required minimum distributions. Suspension Type: Spring, Rear Tire Size: 295 75R22. Vari-Master 183 6E (5E+1) for sale at American Falls, Blackfoot, Idaho Falls, Rexburg, Rupert, Idaho. 
Suspension Calculator is an assorted collection of vehicle dynamics mathematical models that enable calculation of static and dynamic properties of a vehicle, thereby furnishing data that is required for the development of a vehicle's suspension system. Vehicle VIN: JTHD51FF4L5011464. Suspension Sag Calculator For manual testing and adjustment of a motorcycle's suspension sag, Racetech's method is the most accurate. Suspension calculator solving for torsional or rigidity modulus given mean coil diameter, wire diameter, spring rate constant and number of active coils. Calculators. dose (mg/kg/day) x weight (kg) concentration (mg/cc) x frequency Created: Tuesday, April 22, 2002 Last Modified:. Suspension, wheel and tire weights (Unsprung weight) affect the compliance of the suspension, which in turn affects handling, so keeping all these components as light as possible is an advantage. The work RVU calculator provides quick analysis of work relative value units associated with CPT® and HCPCS Level II codes. Final Pay Calculator - This calculator estimates your retirement benefits under the Final Pay retirement plan, for those members who first joined prior to September 8, 1980. 95 (USD) K-Tech Suspension Razor-R Rear Shock - #279S-015-230-020 GSX-S1000 15-18 Fully Adjustable/Remote Reservoir. How many mL of this suspension should be given to. Answer: A 3-year old child should be prescribed 4. The reduction or suspension of safe harbor matching contributions is effective no earlier than the later of 30 days after eligible employees are provided the supplemental notice and the amendment. Then calculate the favored speed and the natural frequency of the rear suspension of your tow vehicle. A suspension of child support from one month to the next is highly unlikely. Oak, Color: White. asked by johnson on August 5, 2015; Pharmacy math. 0 percent or higher. Suspension measurements used in the Suspension Tuning Spreadsheet. 
Performance Air Suspension Air Lift Performance is the performance division of Air Lift Company and produces full air ride and air management systems for lowered and performance vehicles. Recipe Dosing. I used www. Depending on the vehicle, these kits include springs, shocks, control arms, radius rods, and other attachment hardware to provide maximum lift, up to a whopping 10". There are also various other settings, depending on the manufacturer. Calculate the total quantity and the total days supply for the following Rx: Viravan DM 4 ounces 1 tea q12h prn -----The doctor has prescribed Viravan DM oral suspension, but many pharmacists may dispense Tannate-V-DM suspension. A suspension should be designed to use about 1/4 to 1/3 of it's maximum travel under normal load. For many, suspension is what makes a mountain bike a mountain bike. Demobilization and remobilization costs. They help you get the right height, clearance, and attitude on your truck and maintain that factory ride quality, while improving handling. Suspension calculator solving for torsional or rigidity modulus given mean coil diameter, wire diameter, spring rate constant and number of active coils. EATON Detroit Spring, Inc. Below you'll find the basic information you need to know about the DMV point system in California along with common violations that could result in a suspension of your driver's license. Download our suspension form, print it, fit it out and mail it in with your suspension. Wind can be detrimental to a bridge. The project is open source and in the language of Microsoft Visual C#, making it free for anyone to program their own updates and for people to share. The main reason for the difference is due to the different design goals between front and rear suspension, whereas suspension is usually symmetrical between the left and right of the vehicle. 2lbs, 1Stone = 14lbs. 
BMR is also known as your body’s metabolism; therefore, any increase to your metabolic weight, such as exercise, will increase your BMR. percentage grams (g) kilograms (kg) ounces (oz) pounds (lbs). This calculator will only give approximations depending on the accuracy of your information. Values of Higher Order can also be calculated. 6, imitation pineapple flavor, imitation orange flavor and purified water. Two 250 mg tablets of AUGMENTIN should not be substituted for one 500 mg tablet of AUGMENTIN. I've even watched a couple guides on YouTube and read instructional posts on various forums, but i just don't understand how it works. TruckWeight Mechanical Suspension Sensors provide measurements to within 1% of GVW by accurately measuring the deflection of the axle and provide per axle weights. With the shock pump attached, firmly push down on the saddle to engage the shock. To calculate suspension frequency for an individual corner, you need Mass and Spring rate: f = 1/(2π)√(K/M) f = Natural frequency (Hz) K = Spring rate (N/m) M = Mass (kg) When using these formulas, it is important to take Mass as the total sprung mass for the corner being calculated. An NJ Advance Media developer was able to calculate the county-level transmission rates using data from the state Department of Health and the same methodology the state uses. Purpose Pain reliever/fever reducer. Calculate adjustable instant center geometry for your Mustang drag car suspension with removable Instant Center plates. pdf), Text File (. Unsprung weight is the vehicle weight that is not supported by the springs. We produce suspension springs for Husqvarna off road bikes. Meant to be used in both the teaching and research laboratory, this calculator (see below) can be utilized to perform dilution calculations when working with solutions having mass per volume (i. How to calculate the volume of suspension for passaging cell. (Rider Sag). View these components like springs. 
Calculate Normality: grams active solute per liter of solution symbol: N Example: For acid-base reactions, what would be the normality of 1 M solution of sulfuric acid (H 2 SO 4) in water? Sulfuric acid is a strong acid that completely dissociates into its ions, H + and SO 4 2-, in aqueous solution. When the suspension system is designed, a 1/4 model (one of the four wheels) is used to simplify the problem to a 1-D multiple spring-damper system. A suspension is a heterogeneous mixture that has particles held in a liquid or gas that are not dissolved, such as sand in water, oil in water and smoke in air. The bridge will span 18m across a stream (anchor to anchor) in my backyard. The Benefits of a Suspension Calculator Posted by Brad on 10/14/2015 to Install Tips Fabricating a leaf spring or link suspension requires precise measurements, all of which determine the basic functionality and off-road performance of your rig. Use the suspension for steps 7, 8, and 9, and then prepare the chloroplasts for storage in step 10. Spring Rate Calculator for Independent Suspension. You know there are 2 moles of H+ ions (the. Timing Belt, Timing Belt Idler Pulley, and Water Pump Replacement. I haven't done much fact checking on it, but it seems pretty comprehensive for front suspension SLA designs. You enter heights above the ground and distance ahead of the front axle for up to 6 holes for all 4 brackets for your car, or any number of cars. © 2020 Diamondback Bicycles. Skyjacker® Suspension Lift Kits provide a smooth, comfortable ride, while responding to on or off-road conditions. BMR Calculator Basal Metabolic Rate is the number of calories required to keep your body functioning at rest. This model is for an active suspension system where an actuator is included that is able to generate the control force U to control the motion of the bus body. View these components like springs. 
Qes is a measurement of the control coming from the speaker's electrical suspension system (the voice coil and magnet). Suspension reinstatement committee appointments are available on a first come, first served basis. different reagents would require different percentages but. Use this calculator to make sure your tires are similar sizes front and rear! Tire Width Size 1 145 155 165 175 185 195 205 215 225 235 245 255 265 275 285 295 305 315 325 335 345. Call them, tell them all your info and your riding style, pace etc. calculate the ordered medication for each of the following IV bags to achieve the ordered concentration Ordered: Add 30 mEq KCl per L of IV fluid. AFCO racing and performance parts began over 30 years ago with a simple need for better suspension. Sallie Mae is the nation’s saving, planning, and paying for college company, offering private education loans, free college planning tools, and online banking. I just got a new axle, set up with a 3 link. Estimation of chlorophyll a concentration of the suspension. How many mL of this suspension should be given to. asked by johnson on August 5, 2015; Pharmacy math. Two 250 mg tablets of AUGMENTIN should not be substituted for one 500 mg tablet of AUGMENTIN. Search Suspension By Brand & Type Year 2021 2020 2019 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2004 2003 2002 2001 2000 1999 1998 1997 1996 1995 1994 1993 1992 1991 1990 1989 1988 1987 1986 1985 1984 1983 1982 1981 1980 1979 1978 1977 1976 1975 1974 1973 1972 1971 1970 1969 1968 1967 1966 1965 1964 1957 1956 1955. The R&D has already been done for you. We then determine the correct Pro-Action Valve Bodies to be used to match up with the valving systems to be installed. Anti-squat in the 4 link calculator, when used to model the front suspension, could/should be labeled pro-dive. Please type the text you see in the image into the text box and submit. 
This tool can be used to determine how much water or buffer is needed to resuspend a dry lyophilized oligo in order to get a desired final concentration. Our customers are very familiar with our Remote Reservoir which is known for its excellent performance. Preload is simply the amount the springs are compressed while the suspension is fully extended. All you have to do is add your height, weight, and Specialized bike model, and you'll get all the info you need to take your ride to the next level. Designing an automotive suspension system is an interesting and challenging control problem. The tire's contact patch swings through an arc whose pivot point is the lower control arm's inner pivot. This is checked by holding rear shock absorber points & applying torque at front shock absorber points. Front Suspension Components, Brackets etc. ) It also contains a calculator for comparing tubular and solid Anti-Roll Bars. Suspension frequency is defined as the undamped natural frequency of the body in ride. Slavens Racing offers Suspension for Adventure, Enduro and Dirt Bike riders. Speaker Box Acoustic Suspension Enclosure Calculation Equivalent Volume (V as ) ft 3. The particle concentration is checked after the suspension is prepared and regularly monitored as part of the quality system checks. Along with higher car insurance rates and fines, accumulating too many points on your driving record can lead to a license suspension. Seleccione aquí para Español Fees for driver license and motor vehicle services are established in Florida Law. Find what others are running for wheels, tires, suspension, lighting, and more. Drive Belt Replacement. If you follow the link above and enter your vehicle's year, make and model you'll see what wheels are available by clicking the "wheels" arrow. Upper lateral link, lower parallel links, twin trailing link [Rear] A variation of the reversed lower A-arm style. 1mL) so the concentration will be 2000 cells/0. 
CB Performance Racing Products has VW Performance, Electronic Fuel Injection Systems, Turbos, CNC Ported Cylinder Heads, dune buggy parts, dropped spindles, Weber, Dellorto, crankshafts, connecting rods, complete turnkey engines and disc brake kits for aircooled volkswagens. com aims to provide a calculator that will help you find a solution. Calculate your suspension settings. My Tax Debt Is Older Than 10 Years But The CSED Hasn’t Elapsed. Specialized statement on COVID-19 Read Here. Torque is measured in Inch Pounds. Enter your suspension measurements in the white boxes below. i now run a 6. From oil changes to brakes, engine diagnostics and suspension - We handle it all. US Dept of Commerce National Oceanic and Atmospheric Administration National Weather Service El Paso, TX 7955 Airport Rd Santa Teresa, NM 88008 (575) 589-4088. An example of how you can calculate the slurry flow/volume of a given SG, %Solids and Tonnage. Bottom line though is that a front suspension is easier than a rear suspension in some ways. This is checked by holding rear shock absorber points & applying torque at front shock absorber points. SusProg3D is a complete 3D software package for the design, setup, evaluation and VISUALIZATION of race and road car SUSPENSION SYSTEMS Now with support for 30 different suspension types. The bridge will span 18m across a stream (anchor to anchor) in my backyard. Another method of calculating the CSED is to look at the “Date of Assessment” for a particular tax period if you have received IRS Form 668 (Y)(c) – Notice of Federal Tax Lien. Download Suspension Calculator - Perform suspension related calculations, including motion ratios, spring stiffness, wheel loads and more, check out various features and tools. After Config, the next menu item should be [Front]. Inactive ingredients: alcohol ( ≤ 1% v/v), carboxymethylcellulose sodium, flavor, glycerin, methylparaben, propylparaben, purified water, saccharin sodium, sodium citrate, and sucrose (50% w/v). 
Pre-load assigned to the spring. 0L Gtdi I4 Ecoboost Engine, Front Wheel Drive, Independent Rear Suspension, Unique St Sport Suspension, Rear Stabilizer Bar, Electric Pwr Assist Rack & Pinion Steering, 4-Wheel Pwr Disc Brakes, Easy Fuel Capless Fuel-Filler System, Dual Bright Tipped Exhaust, 18' Aluminum Wheels, 18' Y-Rated Summer. Calculate potential monthly payments and compare promotion options with the Payment Calculator from Synchrony. Ibuprofen Nurofen Dose Calculator 100mg in 5ml, enter childs weight in kg and get automatic calculation of the dose in mls and the frequency of the dose as well. Adults: 10 mg (10 mL of suspension or 2 medicine measures) taken 3 times per day, 15 to 30 minutes before meals and, if necessary, before retiring. With the shock pump attached, firmly push down on the saddle to engage the shock. 3rd 4runner 4th 5th air area back battery black brake bumper buy car control cover door engine factory find fit front fuel gen good install installed iphone issue i’m kit led lift light lights limited miles mount oem oil part parts power pro rack rear replaced road roof running sale set shipping shocks side springs sr5 stock suspension switch. Epidemic Calculator. Many athletes and bodybuilders will even inject Testosterone Suspension multiple times per day in order to achieve far more stable and steady optimal blood plasma levels. The geometry in their formula eliminates any marginal grey area by averaging results against one-another from both directions. Test drive this Certified Eminent White Pearl 2020 Lexus LS & experience the Sewell difference today. Apply your calculated suspension settings. M & N Calculator. Browse through the largest online truck fitment gallery, created for enthusiasts by enthusiasts. 1 Cycles per Earth day, based on a calendar day (also called a "synodic day") - the time from midnight to midnight on two successive days. 
With over 70,000 trucks in the gallery with detailed fitment information, it's easy to find your perfect set up. All in one suspension solution. Shock stroke - 1inch = 25. This cannot be appealed. Act 24, which lowered Pennsylvania's legal limit of alcohol from. , a rate of 4. Machine Design Equations, Applications and Calculators. Even if you agree to a BAC test and, if the test indicates that your BAC was. Find what others are running for wheels, tires, suspension, lighting, and more. The dyno data below at suspension velocities of 550 in/sec is very rare and important data in quantifying the effect of shim stack structure distortions on suspension performance. 4 mm 1 lbs = 4. Suspension elements can be adjusted for shock pressure, compression and rebound. Calculators. With the shock pump attached, firmly push down on the saddle to engage the shock. Editor’s Note: It’s been just over three years since Yeti Cycles unveiled its ‘Switch Technology’ eccentric link suspension platform with the immensely popular SB-66 and subsequent SB (Super Bike) models, and now they’re already introducing an altogether new system. FOA Coilovers will typically use a Dual Rate Spring setup. Oligo Resuspension Calculator Easily create a stock solution by allowing the resuspension calculator take the guesswork out of dissolving your oligo. The key characteristic that differentiates a suspension mixture from a solution mixture is the heterogeneous property of a suspension, versus the homogeneous property of a solution. 30 days, if your suspension was caused by failing to pay a traffic ticket. Check the gauge, and adjust the PSI again if necessary. Convert lbs/inch kg/mm Convert lbs/inch N/mm Convert kg/mm lbs/inch Convert kg/mm N/mm Convert N/mm lbs/inch Convert N/mm kg/mm Convert inches mm Convert mm inches. If for example the car is set up with a pair of 500 lb/in springs (K) and has a spring span (s) at the chassis of 48 inches the roll stiffness (RS f) is as shown below. 
Suspension Design by Rajeev Mokashi 6 Basic Suspension Terminology Ride Height 7. Coronavirus information: Find out about your workplace entitlements and obligations during the impact of coronavirus. Selecting a fourlink or ladder bar suspension Benefits of a FOURLINK: The improved suspension adjustability is the greatest benefit and can shorten quarter mile times, or more specifically, 60 ft. com provides interactive military training tools and resources. Then calculate the favored speed and the natural frequency of the rear suspension of your tow vehicle. Back to top Click on the desired Timing Calculator link below: 525-01/02 Calculator 525-03 Calculator 525-04 Calculator 307-02 Calculator. I have only owned the one race car, the first racing car car I've bought, but through a great deal of research & development I have learned quite a bit about making a car go, stop, and go around corners. Use the landing section to quickly solve for any step. Overdose of any medicine can be injurious for liver or kidney. Tension is combated by the cables, which are stretched over the towers and held by the anchorages at each end of the bridge. Suspension reinstatement committee appointments are available on a first come, first served basis. We produce suspension springs for Husqvarna off road bikes. GPA Calculator We have included a GPA Calculator to help you determine your GPA. ” Greg in Clovis, CA – Control Arm Service w/ PolyBronze. 55 per square foot*. You enter heights above the ground and distance ahead of the front axle for up to 6 holes for all 4 brackets for your car, or any number of cars. This model is for an active suspension system where an actuator is included that is able to generate the control force U to control the motion of the bus body. Complete IFS Kits - Car & Truck; Mustang II Parts - Brakes, Arms, Shocks, Racks etc; Mustang II Crossmembers - Weld On Items - Brackets; Suspension. xls), PDF File (. 
Editor’s Note: It’s been just over three years since Yeti Cycles unveiled its ‘Switch Technology’ eccentric link suspension platform with the immensely popular SB-66 and subsequent SB (Super Bike) models, and now they’re already introducing an altogether new system. EXCEPTION: Proration of payments may apply for a month following a month in payment status codes (PSC) N02, N03, N06, N07, N08, N13, N22 or N23. Bike Specific Exact Fit parts are designed specifically for your bike. We produce suspension springs for Husqvarna off road bikes. M & N Calculator. WP Authorized Center. A diagram of this system is shown below. Wedgewood Pharmacy’s oral suspensions and solutions are a familiar and convenient dosage form. Check the gauge, and adjust the PSI again if necessary. Two-in-One Remote Technology Spokes Calculator Contact Contact us. Learn about Function and Form. Divide the horizontal measurement by the vertical one with a calculator then take the inverse tangent (usually denoted by "Tan" with a small "-1," or by "Arctan" or "Atan"). The quicker the suspension has to move the more effect dampers will have. dip the loop into the bacteria suspension ; then vigorously mix a tube of melted agar with the loop; pour the agar into a sterile petri plate ; incubate the plate and count the colonies 20 to 30 hours later. com 0 Items. Amoxicillin Oral Suspension Sugar Free is for oral use. Bicycle Calculator - Graphically compare the frame size and geometry of different bicycles / calculate gear ratios, cadence and speed. The first is a raw equivalent mass calculation for each gear. This cannot be appealed. Calculator Disclaimer We provide this calculator so that you can obtain an estimate of how much child support may be ordered in your case. 
3rd 4runner 4th 5th air area back battery black brake bumper buy car control cover door engine factory find fit front fuel gen good install installed iphone issue i’m kit led lift light lights limited miles mount oem oil part parts power pro rack rear replaced road roof running sale set shipping shocks side springs sr5 stock suspension switch. The dosage may be doubled. Use a local suspension shop. Originally designed specifically to account for the greater bottom bracket drop of 29ers, the CVA pivot and linkage layout is ideal for the range of large.
# This content is archived!

For the 2018-2019 school year, we have switched to using the WLMOJ judge for all MCPT related content. This is an archive of our old website and will not be updated.

# Disjoint Set

This is an example of a disjoint set: {1, 2, 4}, {5}, {7, 8}. There are 3 sets in total, each containing various elements. Note that no element is present in more than one set, making the collection of sets ‘disjoint’ from one another.

A disjoint set supports the following operations:

| Operation | Description |
| --- | --- |
| find(e) | Determine which set the given element e is in |
| union(e1, e2) | Merge the sets containing the elements e1 and e2 into one set |

Disjoint sets can be used to solve various problems quickly; an example is given below:

Given an undirected graph $G$, perform $Q$ operations, either of Type A or Type B.

- Type A: Create a bidirectional edge $(U,V)$ connecting vertices $U$, $V$
- Type B: Determine whether vertices $U,V$ are connected to each other

# Explanation

To keep track of the various elements in each set, a disjoint set appoints an arbitrary element from each set to act as the ‘representative’ of the entire set. Each set can now be represented as a rooted tree, with the representative of the set acting as the root. In the tree, each element is connected to another element, with the exception of the root, which is connected to itself. Only elements belonging to the same set will be connected to each other. To keep track of the various connections, a parent array is used, where parent[e] represents the parent element of e. An example is shown below, representing the collection of sets {1, 2, 4}, {5}, {7, 8}:

# Implementation

When initializing the disjoint set, it is assumed that all elements are initially in their own set. This can be implemented as follows:
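The original code snippet is missing from this archive. A minimal sketch of the initialization, in Python (the function name `make_disjoint_set` is my own, not from the original lesson):

```python
def make_disjoint_set(n):
    # parent[e] holds the parent of element e; a root is its own parent,
    # so initially every element is the root of its own one-element set.
    parent = list(range(n))
    # rank[e] approximates the depth of the tree rooted at e,
    # used later when merging two sets.
    rank = [0] * n
    return parent, rank

parent, rank = make_disjoint_set(5)
print(parent)  # [0, 1, 2, 3, 4] - each element is in its own set
```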
Using DFS, we can traverse up the tree of the given element until the root is found and returned. Note that the find() operation modifies the elements to link directly to the root of the tree for faster access in the future. For example, if find(4) is called on the above graph, the tree is modified as illustrated below:

Implementing the union() operation is also relatively easy. To merge the sets containing e1 and e2, the roots of the trees containing e1 and e2 must be found. If the roots are identical, e1 and e2 are already in the same set and no work needs to be done. Otherwise, the root of the larger tree will become the root of the smaller tree. This is done to improve the performance of the find() operation on elements of the smaller tree (as fewer parent values will have to be updated to account for the new root of the tree).

But how do we know which tree has more elements? In order to always assign the smaller tree to the root of the larger tree, a second array, rank, will be used, where rank[e] represents the ‘depth’ of the tree rooted at e (relative to the other trees). If rank[e1] > rank[e2], this means that e1 is part of a larger tree than e2, and that parent[find(e2)] should be assigned to find(e1). The result of the operation union(4, 7) on the original example is shown below:

The ideas discussed above are implemented below:

## Time Complexity

Construction: $O(N)$, where $N$ is the total number of elements.

Find operation: nearly $O(1)$ amortized; the explanation of this is beyond the scope of this lesson.

Union operation: nearly $O(1)$ amortized; the explanation of this is beyond the scope of this lesson.

## Space Complexity

$O(N)$, where $N$ is the total number of elements.

# Conclusion

The disjoint set is a powerful data structure which can perform find() and union() operations in near constant time. Using disjoint sets, the solution to the given problem can be implemented in approximately $O(N+Q)$ time.

# Practice

Disjoint Set Test
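The lesson's own implementation is also missing from this archive. The sketch below (in Python, with names of my own choosing) combines the find() operation with path compression and the rank-based union() described above, then runs the Type A/Type B queries from the problem statement:

```python
def find(parent, e):
    # Walk up the tree until the root (an element that is its own parent)
    # is found; path compression links e directly to that root.
    if parent[e] != e:
        parent[e] = find(parent, parent[e])
    return parent[e]

def union(parent, rank, e1, e2):
    r1, r2 = find(parent, e1), find(parent, e2)
    if r1 == r2:
        return  # already in the same set; no work needs to be done
    # Attach the shallower tree under the deeper one (union by rank).
    if rank[r1] < rank[r2]:
        r1, r2 = r2, r1
    parent[r2] = r1
    if rank[r1] == rank[r2]:
        rank[r1] += 1

parent, rank = list(range(9)), [0] * 9
union(parent, rank, 1, 2)  # Type A: add edge (1, 2)
union(parent, rank, 2, 4)  # Type A: add edge (2, 4)
union(parent, rank, 7, 8)  # Type A: add edge (7, 8)
print(find(parent, 1) == find(parent, 4))  # Type B: True
print(find(parent, 4) == find(parent, 7))  # Type B: False
```

With path compression the recursion stays shallow in practice, but an iterative find() avoids Python's recursion limit on adversarial inputs.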
# Chapter 5 Exercise A

1. Solution: (a) For any $u\in U$, we have $Tu=0\in U$ since $U\subset \m{null} T$, hence $U$ is invariant under $T$. (b) For any $u\in U$, we have $Tu\in\m{range} T \subset U$, hence $U$ is invariant under $T$.

2. Solution: See Linear Algebra Done Right Solution Manual Chapter 5 Problem 4. Let $\lambda=0$.

3. Solution: For any $u\in \m{range} S$, there exists $v\in V$ such that $Sv=u$, hence $Tu=TSv=STv\in \m{range} S.$ Therefore range $S$ is invariant under $T$.

4. Solution: See Linear Algebra Done Right Solution Manual Chapter 5 Problem 1.

5. Solution: See Linear Algebra Done Right Solution Manual Chapter 5 Problem 2.

6. Solution: See Linear Algebra Done Right Solution Manual Chapter 5 Problem 3.

7. Solution: Let $(x,y)$ be an eigenvector of $T$ corresponding to eigenvalue $\lambda$. Then we have $T(x,y)=\lambda(x,y),$ i.e., $(\lambda x,\lambda y)=(-3y,x)$. Hence $\lambda x=-3y$ and $\lambda y=x$, and it follows that $\lambda^2xy=-3xy$. If $xy\ne 0$, then $\lambda^2=-3$, which is impossible. If $x=0$, then $y=0$ by $\lambda x=-3y$; however, $(x,y)$ is an eigenvector, hence $(x,y)\ne (0,0)$, a contradiction. If $y=0$, then $x=0$ by $\lambda y=x$, and we get a contradiction similarly. Hence no such eigenvectors exist, namely $T$ has no eigenvalues.

8. Solution: See Linear Algebra Done Right Solution Manual Chapter 5 Problem 5.

9. Solution: See Linear Algebra Done Right Solution Manual Chapter 5 Problem 6.

10. Solution: (a) Suppose $v=(v_1,\cdots,v_n)$ is an eigenvector of $T$ corresponding to eigenvalue $\lambda$. Then we have $Tv=\lambda v$, hence $$\label{5AP1}(v_1,2v_2,\cdots,nv_n)=(\lambda v_1,\lambda v_2,\cdots,\lambda v_n).$$ As $v\ne 0$ by the definition of eigenvectors, there is some $i\in \{1,2,\cdots,n\}$ such that $v_i\ne 0$. Note that we have $iv_i=\lambda v_i$ by $(\ref{5AP1})$, which implies $\lambda=i$. For $\lambda=i$, it is easy to solve $(\ref{5AP1})$.
We can conclude that the corresponding eigenvectors are of the form $(0,\cdots,0,a,0,\cdots,0),a\in\mb F$, with $a$ in the $i$-th component. Similarly, all eigenvalues of $T$ are $1$, $2$, $\cdots$, $n$, and all eigenvectors with respect to $i$ are of the form $(0,\cdots,0,a,0,\cdots,0),a\in\mb F$, with $a$ in the $i$-th component.

(b) Suppose $W$ is an invariant subspace of $T$. Let $e_i=(0,\cdots,0,1,0,\cdots,0)$ with $1$ in the $i$-th component. Then $e_1$, $\cdots$, $e_n$ is a basis of $\mb F^n$ and $e_i$ is an eigenvector of $T$ corresponding to $i$. If $a_1e_1+\cdots+a_ke_k\in W$ with $a_1\cdots a_k\ne 0$, we will show $\m{span}(e_{1},e_{2},\cdots,e_{k})\subset W$. Since $a_1e_1+\cdots+a_ke_k\in W$ and $W$ is invariant with respect to $T$, it follows that $T(a_1e_1+\cdots+a_ke_k)=a_1e_1+\cdots+ka_ke_k\in W.$ Hence $k(a_1e_1+\cdots+a_ke_k)-(a_1e_1+\cdots+ka_ke_k)=(k-1)a_1e_1+\cdots+a_{k-1}e_{k-1}\in W,$ and the coefficients are nonzero. Inductively, we will get some $\lambda_1e_1+\cdots+\lambda_ie_i\in W$ with $\lambda_1\cdots\lambda_i\ne 0$ for any $i\leqslant k$ ($\lambda_1$, $\cdots$, $\lambda_i$ change as $i$ changes). In particular, $\mu_1e_1\in W$ with $\mu_1\ne 0$, hence $e_1\in W$. Then, considering $\eta_1e_1+\eta_2e_2\in W$ where $\eta_1\eta_2\ne 0$, we get $e_2\in W$. Inductively, we can show that $\{e_{1},e_{2},\cdots,e_{k}\}\subset W$. Hence $\m{span}(e_{1},e_{2},\cdots,e_{k})\subset W$.

Similarly, if $a_{i_1}e_{i_1}+\cdots+a_{i_k}e_{i_k}\in W$ with $a_{i_1}\cdots a_{i_k}\ne 0$ and all the indices $i_j$ distinct, then $\m{span}(e_{i_1},\cdots,e_{i_k})\subset W$.

Now let us consider the general form of $W$. Suppose $W\cap \{e_1,\cdots,e_n\}=\{e_{i_1},\cdots,e_{i_k}\}$; we will show $\m{span}(e_{i_1},\cdots,e_{i_k})= W$. It is obvious that $\m{span}(e_{i_1},\cdots,e_{i_k})\subset W$. If there is some $w\in W$ but $w\notin \m{span}(e_{i_1},\cdots,e_{i_k})$.
Then $w$ can be written as $w=b_1e_1+\cdots+b_ne_n$, $b_1,\cdots,b_n\in \mb F$, such that there is some $s\notin \{i_1,\cdots,i_k\}$ with $b_s\ne 0$. By the previous argument, we have $e_s\in W$. This contradicts $W\cap \{e_1,\cdots,e_n\}=\{e_{i_1},\cdots,e_{i_k}\}$. Hence we have shown that $\m{span}(e_{i_1},\cdots,e_{i_k})= W$. Moreover, all invariant subspaces of $T$ have this form. I am not satisfied with this solution. ❗

11. Solution: Suppose $\lambda$ is an eigenvalue of $T$ with an eigenvector $q$, then $q'=Tq=\lambda q.$ Note that in general $\deg p'<\deg p$ (because we consider $\deg 0=-\infty$). If $\lambda\ne 0$, then $\deg \lambda q=\deg q>\deg q'$, contradicting $q'=\lambda q$. If $\lambda=0$, then $q'=0$, so $q=c$ for some nonzero $c\in\R$. Hence the only eigenvalue of $T$ is zero, with the nonzero constant polynomials as eigenvectors.

12. Solution: Suppose $\lambda$ is an eigenvalue of $T$ with an eigenvector $q$. Let $q=a_nx^n+\cdots+a_1x+a_0$ with $a_n\ne 0$, then $\lambda q=Tq=xq',$ namely $\lambda a_nx^n+\cdots+\lambda a_1x+\lambda a_0=na_nx^n+\cdots+2a_2x^2+a_1x.$ Since $a_n\ne 0$, it follows that $\lambda =n$ by considering the leading coefficient. Then we have $a_0=a_1=\cdots=a_{n-1}=0$, hence $q=a_nx^n$. Hence the eigenvalues of $T$ are $0,1,2,\cdots$, and the eigenvectors corresponding to $m\in\mb{N}$ are the polynomials $\lambda x^m$ with $\lambda\in\R$, $\lambda\ne 0$.

13. Solution: Let $\alpha_i\in\mb F$ be such that $\left|\alpha_i-\lambda\right| = \frac{1}{1000+i},\quad i=1,\cdots,\dim V+1.$ These $\alpha_i$ exist and are different from each other since $\mb F=\R$ or $\C$. Note that each operator on $V$ has at most $\dim V$ distinct eigenvalues by 5.13. Hence there exists some $i\in\{1,2,\cdots,\dim V+1\}$ such that $\alpha_i$ is not an eigenvalue of $T$. Then by 5.6, $T-\alpha_i I$ is invertible.

14. Solution: Note that any $v\in V$ can be written uniquely as $u+w$ for $u \in U$ and $w \in W$ since $V=U\oplus W$. It follows that $P$ is well-defined.
One should also check that $P\in\ca L(V)$. Now let us consider the eigenvalues of $P$. Consider $v\ne 0$ such that $Pv=\lambda v$ for some $\lambda\in\mb F$. Write $v=u+w$ for $u \in U$ and $w \in W$; then $u$ and $w$ cannot both be zero. By the definition of $P$, we have $Pv=u,\quad \lambda v=\lambda u+\lambda w.$ It follows that $u=\lambda u+\lambda w$, namely $(\lambda-1)u+\lambda w=0$. Since $V=U\oplus W$, it follows that $(\lambda-1)u=\lambda w=0$. If $u\ne 0$, then $\lambda =1$, hence $w=0$, and the corresponding eigenvectors are the nonzero vectors $v\in U$. If $w\ne 0$, then $\lambda =0$, hence $u=0$, and the corresponding eigenvectors are the nonzero vectors $v\in W$.

15. Solution: (a) Suppose $\lambda$ is an eigenvalue of $T$, then there exists a nonzero vector $v\in V$ such that $Tv=\lambda v$. Hence $S^{-1}TS(S^{-1}v)=S^{-1}Tv=S^{-1}(\lambda v)=\lambda S^{-1}v.$ Note that $S^{-1}v\ne 0$ as $S^{-1}$ is invertible, hence $\lambda$ is an eigenvalue of $S^{-1}TS$; namely, every eigenvalue of $T$ is an eigenvalue of $S^{-1}TS$. Similarly, since $S(S^{-1}TS)S^{-1}=T$, every eigenvalue of $S^{-1}TS$ is an eigenvalue of $T$. Hence $T$ and $S^{-1}TS$ have the same eigenvalues. (b) From the process of (a), one can easily deduce that $v$ is an eigenvector of $T$ if and only if $S^{-1}v$ is an eigenvector of $S^{-1}TS$.

16. Solution: Although this result is also true for infinite-dimensional vector spaces, I will only consider the finite-dimensional case, since we are considering the matrix of $T$ (otherwise, it would be an infinite matrix). Suppose the matrix of $T$ with respect to a basis $e_1$, $\cdots$, $e_n$ of $V$ contains only real entries. Then $Te_j=A_{1,j}e_1+\cdots+A_{n,j}e_n,$ where $A_{i,j}\in\R$ for all $i,j=1,2,\cdots,n$. Let $v=k_1e_1+\cdots+k_ne_n$ be an eigenvector corresponding to $\lambda$, where $k_i\in\C$, $i=1,\cdots,n$.
Then we have $Tv=\lambda v,$ namely $$\label{5A161} \lambda\sum_{i=1}^nk_ie_i=\sum_{i=1}^nk_iTe_i=\sum_{i=1}^n\sum_{j=1}^nk_iA_{j,i}e_j.$$ Taking the complex conjugate of the coefficients in $(\ref{5A161})$, we have $$\label{5A162} \overline{\lambda}\sum_{i=1}^n\overline{k_i}e_i=\sum_{i=1}^n\overline{k_i}Te_i=\sum_{i=1}^n\sum_{j=1}^n\overline{k_i}A_{j,i}e_j$$ since $A_{i,j}\in\R$ for all $i,j=1,2,\cdots,n$ (why? consider components). Note that $(\ref{5A162})$ implies $$\label{5A163} T(\overline{k_1}e_1+\cdots+\overline{k_n}e_n)=\overline{\lambda}\sum_{i=1}^n\overline{k_i}e_i.$$ Since $v=k_1e_1+\cdots+k_ne_n\ne 0$, not all $k_i$ are zero, and hence not all $\overline{k_i}$ are zero. Hence $\overline{k_1}e_1+\cdots+\overline{k_n}e_n\ne 0$, and $(\ref{5A163})$ tells us $\bar{\lambda}$ is an eigenvalue of $T$.

17. Solution: See Linear Algebra Done Right Solution Manual Chapter 5 Problem 23.

18. Solution: Suppose $\lambda$ is an eigenvalue of $T$ and one corresponding eigenvector is $(w_1, w_2,\cdots)$. Then not all $w_i$ are zero. Moreover, we have $(0,w_1, w_2,\cdots)=T(w_1, w_2,\cdots)=\lambda(w_1, w_2,\cdots).$ If $\lambda=0$, then $(0,w_1, w_2,\cdots)=0$ implies $w_i= 0$ for all $i\in\mb N^+$; we get a contradiction. If $\lambda\ne 0$, consider the first component: $0=\lambda w_1$, hence $w_1=0$. Then consider the second component: $\lambda w_2=w_1=0$, hence $w_2=0$. By induction, one easily deduces that $w_i= 0$ for all $i\in\mb N^+$; we get a contradiction as well. Hence $T$ has no eigenvalues.

19. Solution: See Linear Algebra Done Right Solution Manual Chapter 5 Problem 7.

20. Solution: See Linear Algebra Done Right Solution Manual Chapter 5 Problem 8.

21. Solution: See Linear Algebra Done Right Solution Manual Chapter 5 Problem 10. (b) is almost proved there.

22. Solution: Note that we have $T(v+w)=Tv+Tw=3w+3v=3(v+w)$ and $T(v-w)=Tv-Tw=3w-3v=-3(v-w).$ If $v+w$ is nonzero, then $3$ is an eigenvalue of $T$; if $v-w$ is nonzero, then $-3$ is an eigenvalue of $T$.
In fact, $v-w$ and $v+w$ cannot both be zero: otherwise $v=w=0$, contradicting $v\ne 0$ and $w\ne 0$.

23. Solution: See Linear Algebra Done Right Solution Manual Chapter 5 Problem 11.

24. Solution: (a) If the sum of the entries in each row of $A$ equals $1$, then one can easily deduce that $T\left( \begin{array}{c} 1 \\ \vdots \\ 1 \\ \end{array} \right) =\left( \begin{array}{c} 1 \\ \vdots \\ 1 \\ \end{array} \right).$ Hence $1$ is an eigenvalue of $T$ with $\left( \begin{array}{c} 1 \\ \vdots \\ 1 \\ \end{array} \right)$ as a corresponding eigenvector.

(b) This problem is interesting. It would be simple by considering determinants; however, without them it is more complicated. We just need to show that $T-I$ is not invertible, and it suffices to show $T-I$ is not surjective by 5.6. Note that we have $$\label{5AP241} (T-I)\left( \begin{array}{c} x_1 \\ \vdots \\ x_n \\ \end{array} \right)=\left( \begin{array}{c} \sum_{i=1}^n A_{1,i}x_i-x_1 \\ \vdots \\ \sum_{i=1}^n A_{n,i}x_i-x_n \\ \end{array} \right)=\left( \begin{array}{c} y_1 \\ \vdots \\ y_n \\ \end{array} \right),$$ where $A_{i,j}$ is the $(i,j)$-component of $A$. Moreover, we have $1=\sum_{i=1}^n A_{i,j},\quad j=1,\cdots,n.$ Hence \begin{align*} y_1+\cdots+y_n=&\sum_{j=1}^n\sum_{i=1}^n A_{j,i}x_i-\sum_{j=1}^n x_j\nonumber\\ =&\sum_{i=1}^nx_i \sum_{j=1}^n A_{j,i}-\sum_{j=1}^n x_j\\ =&\sum_{i=1}^nx_i-\sum_{j=1}^n x_j=0\nonumber.\end{align*} By $(\ref{5AP241})$ and the previous equation, it follows that $\m{range}(T-I)\subset \{(x_1,\cdots,x_n)^T\in\mb F^{n}:x_1+\cdots+x_n=0\},$ where $(x_1,\cdots,x_n)^T$ means $\left( \begin{array}{c} x_1 \\ \vdots \\ x_n \\ \end{array} \right)$. It follows that $T-I$ is not surjective, hence completing the proof.

25.
Solution: Let the eigenvalues corresponding to $u,v$ be $\lambda_1,\lambda_2$ respectively, so that $Tu=\lambda_1u,\quad Tv=\lambda_2 v.$ If the eigenvalue corresponding to $u+v$ is $\lambda$, we have $\lambda(u+v)=T(u+v)=Tu+Tv=\lambda_1u+\lambda_2v.$ It follows that $(\lambda-\lambda_1)u+(\lambda-\lambda_2)v=0$. If $\lambda_1\ne\lambda_2$, then $\lambda-\lambda_1$ and $\lambda-\lambda_2$ cannot both be zero, hence $u$ and $v$ are linearly dependent. This contradicts 5.10, since $u$ and $v$ are eigenvectors corresponding to distinct eigenvalues. Hence $\lambda_1=\lambda_2$.

26. Solution: See Linear Algebra Done Right Solution Manual Chapter 5 Problem 12.

27. Solution: See Linear Algebra Done Right Solution Manual Chapter 5 Problem 13.

28. Solution: For any nonzero vector $v\in V$, extend it to a basis of $V$ as $v=v_1$, $v_2$, $\cdots$, $v_n$. Then $Tv_1=\sum_{k=1}^n\lambda_kv_k.$ Consider $U=\m{span}(v_1,v_2)$; as $U$ is invariant under $T$ by assumption, it follows that $Tv_1\in U$. Hence $\lambda_3=\cdots=\lambda_n=0$. Similarly, considering $U=\m{span}(v_1,v_3)$ (note that $\dim V \ge 3$), we conclude $\lambda_2=\lambda_4=\cdots=\lambda_n=0$. Hence $\lambda_2=\cdots=\lambda_n=0$. This means $v_1$ is an eigenvector of $T$. That is, every nonzero vector in $V$ is an eigenvector of $T$, since $v$ was chosen arbitrarily. By Problem 26, we deduce that $T$ is a scalar multiple of the identity operator.

29. Solution: See Linear Algebra Done Right Solution Manual Chapter 5 Problem 9.

30. Solution: Note that $T$ has at most $\dim(\R^3)=3$ eigenvalues (by 5.13) and $4$, $5$, and $\sqrt{7}$ are eigenvalues of $T$; it follows that $9$ is not an eigenvalue of $T$. Hence $T-9I$ is surjective (by 5.6). Thus there exists $x\in\R^3$ such that $(T-9I)x=(4,5,\sqrt{7})$, namely $Tx-9x=(4,5,\sqrt{7})$.

31. Solution: Suppose there exists $T\in \ca L(V)$ such that $v_1$, $\cdots$, $v_m$ are eigenvectors of $T$ corresponding to distinct eigenvalues.
Then $v_1$, $\cdots$, $v_m$ is linearly independent by 5.10. Conversely, suppose $v_1$, $\cdots$, $v_m$ is linearly independent; then we can extend it to a basis of $V$ as $v_1$, $\cdots$, $v_m$, $v_{m+1}$, $\cdots$, $v_n$. Define $T\in \ca L(V)$ by $Tv_i=iv_i,\quad i=1,\cdots,n.$ Then $v_1$, $\cdots$, $v_m$ are eigenvectors of $T$ corresponding to the distinct eigenvalues $1$, $\cdots$, $m$, respectively.

32. Solution: Let $V=\m{span}(e^{\lambda_1 x}, \cdots, e^{\lambda_n x})$, and define an operator $T\in \ca L(V)$ by $Tf=f'$ (one should check that $T\in \ca L(V)$). Then $Te^{\lambda_i x}=\lambda_ie^{\lambda_i x}.$ Hence $\lambda_i$ is an eigenvalue of $T$ with corresponding eigenvector $e^{\lambda_i x}$. As $\lambda_1$, $\cdots$, $\lambda_n$ is a list of distinct real numbers, by 5.10 it follows that $e^{\lambda_1 x}$, $\cdots$, $e^{\lambda_n x}$ is linearly independent.

33. Solution: By definition, for any $x+\m{range} T\in V/(\m{range} T )$, we have $T/(\m{range} T )(x+\m{range} T)=Tx+\m{range} T.$ Note that $Tx\in \m{range} T$, so $T/(\m{range} T )(x+\m{range} T)=0$. Since $x+\m{range} T$ was chosen arbitrarily, we conclude that $T/(\m{range} T )=0$.

34. Solution: By definition, for any $x+\m{null} T\in V/(\m{null} T )$, we have $T/(\m{null} T )(x+\m{null} T)=Tx+\m{null} T.$ Hence $T/(\m{null} T )$ is injective if and only if $Tx\in \m{null} T\iff x\in\m{null}T.$ Note that $Tx\in \m{null} T\iff x\in\m{null}T$ is equivalent to $\m{null} T\cap\m{range} T=\{0\}$. Indeed, if we assume $Tx\in \m{null} T\iff x\in\m{null}T$, then for any $v\in\m{null} T\cap\m{range} T$ there exists $u\in V$ such that $Tu=v$; hence $Tu\in \m{null} T$ implies $u\in\m{null}T$, that is, $v=Tu=0$. The other direction is also true. This completes the proof.

35. Solution: Suppose $\lambda\in\mb F$ is an eigenvalue of $T/U$; we need to show $\lambda$ is an eigenvalue of $T$. There exists a nonzero $x+U\in V/U$ (i.e.
$x\not\in U$) such that $(T/U)(x+U)=\lambda(x+U)\Longrightarrow Tx-\lambda x\in U.$ If $\lambda$ is an eigenvalue of $T|_U$, then we are done. If $\lambda$ is not an eigenvalue of $T|_U$, then $T|_U-\lambda I:U\to U$ is invertible by 5.6 (here we use that $U$ is finite-dimensional, since $\dim V<\infty$). Hence there exists $y\in U$ such that $(T|_U-\lambda I)y=Tx-\lambda x\Longrightarrow Ty-\lambda y=Tx-\lambda x,$ since $Tx-\lambda x\in U$. Hence we have $T(x-y)=\lambda(x-y),$ and $x-y\ne 0$ since $x\not\in U$ and $y\in U$. It follows that $\lambda$ is an eigenvalue of $T$.

36. Solution: In Problem 32, we showed that $1=e^{0x}$, $e^x$, $e^{2x}$, $\cdots$ are linearly independent in the vector space of real-valued functions on $\R$. Consider $V=\m{span}(1,e^x,e^{2x},\cdots)$ and $U=\m{span}(e^x,e^{2x},\cdots)$; then $U$ and $V$ are subspaces of the vector space of real-valued functions on $\R$. Define $T\in\ca L(V)$ by $T(f)=e^xf$ (one should check that $T\in \ca L(V)$ and that $U$ is invariant under $T$). Consider $T/U$: we have $(T/U)(1+U)=e^x+U=0+U.$ Since $1\not\in U$, it follows that $0$ is an eigenvalue of $T/U$. However, $0$ is not an eigenvalue of $T$: otherwise there would exist a nonzero $f\in V$ such that $Tf=0$, i.e. $e^xf=0$, hence $f=0$ since $e^x\ne 0$ for all $x\in \R$; a contradiction.
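Some of the results above can be sanity-checked numerically. The Python/NumPy sketch below (my addition, not part of the solutions) checks Problem 16 (the eigenvalues of a matrix with real entries are closed under complex conjugation) and Problem 24(b) (a matrix whose columns each sum to 1 has 1 as an eigenvalue) on random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem 16: for a real matrix, the complex eigenvalues occur in
# conjugate pairs, so the conjugate of each eigenvalue is again one.
A = rng.standard_normal((6, 6))
eigs = np.linalg.eigvals(A)
for lam in eigs:
    assert np.isclose(eigs, lam.conjugate()).any()

# Problem 24(b): if every column of B sums to 1, then 1 is an eigenvalue.
B = rng.random((5, 5))
B /= B.sum(axis=0)                     # normalise each column sum to 1
assert np.allclose(B.sum(axis=0), 1.0)
assert np.isclose(np.linalg.eigvals(B), 1.0).any()
```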
# Inverse of DiracDelta at 0 is 99/5?

In this other question of mine I asked about this, but maybe it is more pertinent here. When using Mathematica we can find the following result:

InverseFunction[DiracDelta][0] == 99/5 (* returns True *)

Is this a bug or is it actually a real result? If it is a valid result, how can we prove it?

• Returns False for me. May 6, 2020 at 21:55
• o.O, I cleaned my Kernel before trying. Are you using 12.1? What does InverseFunction[DiracDelta][0] give for you? May 6, 2020 at 21:57
• v10.4 - and InverseFunction[DiracDelta][0] returns 42/5. I'd call it a bug. Contact Wolfram support to ask them if they have any justification for this (although from the mathematical point of view this doesn't make sense), and report to them this is a likely bug. May 6, 2020 at 21:59
• The $\delta$-distribution is not a usual function, so DiracDelta[99/5] makes no sense. Therefore, the input should be returned with an error message. May 7, 2020 at 10:39
• "an otherwise reasonable approach taken by InverseFunction" - my personal opinion is that InverseFunction[] should just refuse to work with things like DiracDelta[] and HeavisideTheta[], @Szabolcs. I guess we will have to agree to disagree here. May 7, 2020 at 11:09

I disagree that this is a bug. From the InverseFunction docs:

As discussed in "Functions That Do Not Have Unique Values", many mathematical functions do not have unique inverses. In such cases, InverseFunction[f] can represent only one of the possible inverses for f.

Thus, InverseFunction[f][x] returns some y such that f[y] == x. This is fine:

DiracDelta[99/5] (* 0 *)

Another comparable example:

InverseFunction[UnitStep][1] (* 0 *)
InverseFunction[UnitStep][0] (* -1 *)

It seems to me that this is a GIGO situation because DiracDelta and UnitStep yield the same result for infinitely many inputs. Any of those inputs is consistent with the description of what InverseFunction does.
But of course, this behaviour of InverseFunction must have been designed for the more practical case where there are a finite (or countable) number of solutions, such as InverseFunction[#^2 &][1] or InverseFunction[Sin][1].

• The $\delta$-distribution is not a usual function so DiracDelta[99/5] makes no sense. Therefore, the input should be returned with an error message May 7, 2020 at 10:42
• @user64494 By that thinking, there should be no DiracDelta at all in Mathematica. But I'm pretty sure that when Dirac, a physicist, imagined this, he used intuition from normal functions, despite the fact that the idea could only be made mathematically precise using distribution theory. May 7, 2020 at 10:43
• I think the correct implementation of $\delta$-distribution in Mathematica should be done. The current DiracDelta command is a primitive implementation which is buggy. May 7, 2020 at 10:48
• Also by the same thinking, there should be no Infinity in Mathematica, or in any floating point representation standards, because, of course, Infinity is not a number and 1/0 is just nonsense. The countless programs that make use of this value should all be thrown out and considered buggy, along with any CPU implementing IEEE754. May 7, 2020 at 10:50
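The contract quoted from the docs — InverseFunction[f][x] returns some y such that f[y] == x — can be illustrated outside Mathematica. The Python sketch below is my own toy example (the helper names are made up): for a non-injective f, several different "inverses" are all equally consistent with that contract.

```python
# For the non-injective f(y) = y**2, both square-root branches satisfy
# f(inverse(x)) == x at x = 1, just as 99/5 (or 42/5) is only one of
# infinitely many y with DiracDelta[y] == 0.
def f(y):
    return y * y

def inverse_branch_pos(x):   # one possible inverse of f
    return x ** 0.5

def inverse_branch_neg(x):   # a different, equally valid inverse of f
    return -(x ** 0.5)

assert f(inverse_branch_pos(1)) == 1
assert f(inverse_branch_neg(1)) == 1
```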
# Longest Property-Preserved Common Factor

Preprint

## Abstract

In this paper we introduce a new family of string processing problems. We are given two or more strings and we are asked to compute a factor common to all strings that preserves a specific property and has maximal length. Here we consider three fundamental string properties: square-free factors, periodic factors, and palindromic factors under three different settings, one per property. In the first setting, we are given a string $x$ and we are asked to construct a data structure over $x$ answering the following type of on-line queries: given string $y$, find a longest square-free factor common to $x$ and $y$. In the second setting, we are given $k$ strings and an integer $1 < k'\leq k$ and we are asked to find a longest periodic factor common to at least $k'$ strings. In the third setting, we are given two strings and we are asked to find a longest palindromic factor common to the two strings. We present linear-time solutions for all settings. We anticipate that our paradigm can be extended to other string properties or settings.

Pisanti5, Solon P.
Pissis6, and Giovanna Rosone7

1 Department of Informatics, King’s College London, London, UK
2 Department of Informatics, Systems and Communication (DISCo), University of Milan-Bicocca, Italy, giulia.bernardini@unimib.it
3 Department of Computer Science, University of Pisa, Italy and ERABLE Team, INRIA, France, grossi@di.unipi.it
4 Department of Informatics, King’s College London, London, UK, c.iliopoulos@kcl.ac.uk
5 Department of Computer Science, University of Pisa, Italy and ERABLE Team, INRIA, France, pisanti@di.unipi.it
6 Department of Informatics, King’s College London, London, UK, solon.pissis@kcl.ac.uk
7 Department of Computer Science, University of Pisa, Italy, giovanna.rosone@unipi.it

## 1 Introduction

In the longest common factor problem, also known as the longest common substring problem, we are given two strings x and y, each of length at most n, and we are asked to find a maximal-length string occurring in both x and y.
This is a classical and well-studied problem in computer science arising out of different practical scenarios. It can be solved in O(n) time and space [10, 18] (see also [21, 26]). Recently, the same problem has been extensively studied under distance metrics; that is, the sought factors (one from x and one from y) must be at distance at most k and have maximal length [8, 28, 27, 2, 25, 24] (and references therein).

In this paper we initiate a new related line of research. We are given two or more strings and our goal is to compute a factor common to all strings that preserves a specific property and has maximal length. An analogous line of research was introduced in [11]. It focuses on computing a subsequence (rather than a factor) common to all strings that preserves a specific property and has maximal length. Specifically, in [11, 3, 19], the authors considered computing a longest common palindromic subsequence, and in [20] computing a longest common square subsequence.

arXiv:1810.02099v1 [cs.DS] 4 Oct 2018

We consider three fundamental string properties: square-free factors, periodic factors, and palindromic factors [23] under three different settings, one per property. In the first setting, we are given a string x and we are asked to construct a data structure over x answering the following type of on-line queries: given string y, find a longest square-free factor common to x and y. In the second setting, we are given k strings and an integer 1 < k′ ≤ k and we are asked to find a longest periodic factor common to at least k′ strings. In the third setting, we are given two strings and we are asked to find a longest palindromic factor common to the two strings. We present linear-time solutions for all settings. We anticipate that our paradigm can be extended to other string properties or settings.

### 1.1 Definitions and Notation

An alphabet Σ is a non-empty finite ordered set of letters of size σ = |Σ|.
In this work we consider that σ = O(1) or that Σ is a linearly-sortable integer alphabet. A string x on an alphabet Σ is a sequence of elements of Σ. The set of all strings on an alphabet Σ, including the empty string ε of length 0, is denoted by Σ*. For any string x, we denote by x[i..j] the substring (sometimes called factor) of x that starts at position i and ends at position j. In particular, x[0..j] is the prefix of x that ends at position j, and x[i..|x| − 1] is the suffix of x that starts at position i, where |x| denotes the length of x.

A string uu, where u is a non-empty string, is called a square. A square-free string is a string that does not contain a square as a factor.

A period of x[0..|x| − 1] is a positive integer p such that x[i] = x[i + p] holds for all 0 ≤ i < |x| − p. The smallest period of x is denoted by per(x). String u is called periodic if and only if per(u) ≤ |u|/2. A run of string x is an interval [i, j] such that for the smallest period p = per(x[i..j]) it holds that 2p ≤ j − i + 1 and the periodicity cannot be extended to the left or right, i.e., i = 0 or x[i − 1] ≠ x[i + p − 1], and, j = |x| − 1 or x[j − p + 1] ≠ x[j + 1].

We denote the reversal of x by string xR, i.e. xR = x[|x| − 1]x[|x| − 2] . . . x[0]. A string p is said to be a palindrome if and only if p = pR. If factor x[i..j], 0 ≤ i ≤ j ≤ n − 1, of string x of length n is a palindrome, then (i + j)/2 is the center of x[i..j] in x and (j − i + 1)/2 is the radius of x[i..j]. In other words, a palindrome is a string that reads the same forward and backward, i.e. a string p is a palindrome if p = yayR where y is a string, yR is the reversal of y and a is either a single letter or the empty string. Moreover, x[i..j] is called a palindromic factor of x. It is said to be a maximal palindrome if there is no other palindrome in x with center (i + j)/2 and larger radius; x has exactly 2n − 1 maximal palindromes. A maximal palindrome p of x can be encoded as a pair (c, r), where c is the center of p in x and r is the radius of p.
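The palindrome terminology above (centers, radii, maximal palindromes, and the 2n − 1 count) can be made concrete with a small brute-force sketch. The Python function below is my own illustration of the definitions — a quadratic-time method, not one of the linear-time tools used in the paper — and it treats an even center with an immediate mismatch as carrying the empty palindrome (radius 0).

```python
def maximal_palindromes(x):
    """All maximal palindromes of x as (center, radius) pairs.

    Centers c = (i + j) / 2 and radii r = (j - i + 1) / 2 follow the
    encoding in the text; there is one maximal palindrome per center,
    2n - 1 in total.
    """
    n = len(x)
    result = []
    for c2 in range(2 * n - 1):        # twice the center: 0, 1, ..., 2n - 2
        i = c2 // 2
        j = i + (c2 % 2)               # j = i (odd length) or i + 1 (even)
        if x[i] != x[j]:               # only possible when j = i + 1
            result.append((c2 / 2, 0.0))   # empty palindrome at this center
            continue
        while i > 0 and j < n - 1 and x[i - 1] == x[j + 1]:
            i, j = i - 1, j + 1        # extend while still a palindrome
        result.append((c2 / 2, (j - i + 1) / 2))
    return result

pals = maximal_palindromes("ababa")
assert len(pals) == 2 * len("ababa") - 1   # exactly 2n - 1 maximal palindromes
assert (2.0, 2.5) in pals                  # "ababa" itself: center 2, radius 5/2
```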
### 1.2 Algorithmic Toolbox

The maximum number of runs in a string of length n is less than n [4], and, moreover, all runs can be computed in O(n) time [22, 4].

The suffix tree ST(x) of a non-empty string x of length n is a compact trie representing all suffixes of x. ST(x) can be constructed in O(n) time [14]. We can analogously define and construct the generalised suffix tree GST(x0, x1, . . . , xk−1) for a set of k strings. We assume the reader is familiar with these data structures.

The matching statistics capture all matches between two strings x and y [7]. More formally, the matching statistics of a string y[0..|y| − 1] with respect to a string x is an array MSy[0..|y| − 1], where MSy[i] is a pair (ℓi, pi) such that (i) y[i..i + ℓi − 1] is the longest prefix of y[i..|y| − 1] that is a factor of x; and (ii) x[pi..pi + ℓi − 1] = y[i..i + ℓi − 1]. Matching statistics can be computed in O(|y|) time for σ = O(1) by using ST(x) [18, 6, 16].

Given a rooted tree T with n leaves coloured from 0 to k − 1, 1 < k ≤ n, the colour set size problem is finding, for each internal node u of T, the number of different leaf colours in the subtree rooted at u. In [10], the authors present an O(n)-time solution to this problem.

In the weighted ancestor problem, introduced in [15], we consider a rooted tree T with an integer weight function µ defined on the nodes. We require that the weight of the root is zero and the weight of any other node is strictly larger than the weight of its parent. A weighted ancestor query, given a node v and an integer value ℓ ≤ µ(v), asks for the highest ancestor u of v such that µ(u) ≥ ℓ, i.e., such an ancestor u that µ(u) ≥ ℓ and µ(u) is the smallest possible. When T is the suffix tree of a string x of length n, we can locate the locus of any factor x[i..j] of x using a weighted ancestor query. We define the weight of a node of the suffix tree as the length of the string it represents.
Thus a weighted ancestor query can be used for the terminal node corresponding to x[i..n − 1] to create (if necessary) and mark the node that corresponds to x[i..j]. Given a collection Q of weighted ancestor queries on a weighted tree T on n nodes with integer weights up to n^O(1), all the queries in Q can be answered off-line in O(n + |Q|) time [5].

## 2 Square-Free-Preserved Matching Statistics

In this section, we introduce the square-free-preserved matching statistics problem and provide a linear-time solution. In the square-free-preserved matching statistics problem we are given a string x of length n and we are asked to construct a data structure over x answering the following type of on-line queries: given string y, find the longest square-free prefix of y[i..|y| − 1] that is a factor of x, for all 0 ≤ i ≤ |y| − 1. (For related work see [12].) We represent the answer using an integer array SQMSy[0..|y| − 1] of lengths, but we can trivially modify our algorithm to report the actual factors. It should be clear that a maximum element in SQMSy gives the length of some longest square-free factor common to x and y.

Construction. Our data structure over string x consists of the following:

- An integer array Lx[0..n − 1], where Lx[i] stores the length of the longest square-free factor starting at position i of string x.
- The suffix tree ST(x) of string x.

The idea for constructing array Lx efficiently is based on the following crucial observation.

Observation 1. If x[i..n − 1] contains a square, then Lx[i] + 1, for all 0 ≤ i < n, is the length of the shortest prefix of x[i..n − 1] (factor f) containing a square. In fact, the square is a suffix of f, otherwise f would not have been the shortest. If x[i..n − 1] does not contain a square, then Lx[i] = n − i.

We thus shift our focus to computing the shortest such prefixes. We start by considering the runs of x.
Specifically, we consider squares in x observing that a run [ℓ, r] with period p contains r − ℓ − 2p + 2 squares of length 2p, with the leftmost one starting at position ℓ. Let r′ = ℓ + 2p − 1 denote the ending position of the leftmost such square of the run. In order to find, for all i’s, the shortest prefix of x[i..n − 1] containing a square s, and thus compute Lx[i], we have two cases:

1. s is part of a run [ℓ, r] in x that starts after i. In particular, s = x[ℓ..r′] such that r′ ≤ r, ℓ > i, and r′ is minimal. In this case the shortest factor has length ℓ + 2p − i; we store this value in an integer array C[0..n − 1]. If no run starts after position i we set C[i] = ∞. To compute C, after computing in O(n) time all the runs of x with their p and r′ [22, 4], we sort them by r′. A right-to-left scan after this sorting associates to i the closest r′ with ℓ > i.

2. s is part of a run [ℓ, r] in x and i ∈ [ℓ, r]. This implies that if i ≤ r − 2p + 1 then a square starts at i, and we store the length of the shortest such square in an integer array S[0..n − 1]. If no square starts at position i we set S[i] = ∞. Array S can be constructed in O(n) time by applying the algorithm of [13].

Since we do not know which of the two cases holds, we compute both C and S. By Observation 1, if C[i] = S[i] = ∞ (x[i..n − 1] does not contain a square) we set Lx[i] = n − i; otherwise (x[i..n − 1] contains a square) we set Lx[i] = min{C[i], S[i]} − 1. Finally, we build the suffix tree ST(x) of string x in O(n) time [14]. This completes our construction.

Querying. We rely on the following fact for answering the queries efficiently.

Fact 1. Every factor of a square-free string is square-free.

Let string y be an on-line query. Using ST(x), we compute the matching statistics MSy of y with respect to x. For each j ∈ [0, |y| − 1], MSy[j] = (ℓj, ij) indicates that x[ij..ij + ℓj − 1] = y[j..j + ℓj − 1]. This computation can be done in O(|y|) time [18, 6].
By applying Fact 1, we can answer any query y in O(|y|) time for σ = O(1) by setting SQMSy[j] = min{ℓj, Lx[ij]}, for all 0 ≤ j ≤ |y| − 1. We arrive at the following result.

Theorem 1. Given a string x of length n over an alphabet of size σ = O(1), we can construct a data structure of size O(n) in time O(n), answering SQMSy on-line queries in O(|y|) time.

Proof. The time complexity of our algorithm follows from the above discussion. We next show the correctness of our algorithm. Let us first show the correctness of computing array Lx. The square contained in the shortest prefix of x[i..n − 1] (containing a square) starts by definition either at i or after i. If it starts at i, this is correctly computed by the algorithm of [13], which assigns the length of the shortest such square to S[i]. If it starts after i, it must be the leftmost square of another run by the runs definition. C[i] stores the length of the shortest prefix containing such a square. Then by Observation 1, Lx[i] is computed correctly.

It suffices to show that, if w is the longest square-free substring common to x and y, occurring at position ix in x and at position iy in y, then (i) MSy[iy] = (ℓ, ix) with ℓ ≥ |w| and x[ix..ix + ℓ − 1] = y[iy..iy + ℓ − 1]; (ii) w is a prefix of x[ix..ix + Lx[ix] − 1]; and (iii) SQMSy[iy] = |w|. Case (i) directly follows from the correctness of the matching statistics algorithm. For Case (ii), since w occurs at ix and w is square-free, Lx[ix] ≥ |w|. For Case (iii), since w is square-free we have to show that |w| = min{ℓ, Lx[ix]}. We know from (i) that ℓ ≥ |w| and from (ii) that Lx[ix] ≥ |w|. If min{ℓ, Lx[ix]} = ℓ, then w cannot be extended because the possibly longer than |w| square-free string occurring at ix does not occur in y, and in this case |w| = ℓ. Otherwise, if min{ℓ, Lx[ix]} = Lx[ix], then w cannot be extended because it is no longer square-free, and in this case |w| = Lx[ix]. Hence we conclude that SQMSy[iy] = |w|.
The statement follows.

The following example provides a complete overview of the workings of our algorithm.

Example 1. Let x = aababaababb and y = babababbaaab. The length of a longest common square-free factor is 3, and the factors are bab and aba.

```
i        0      1      2      3      4      5      6      7      8      9      10
x[i]     a      a      b      a      b      a      a      b      a      b      b
C[i]     5      6      5      4      3      5      5      4      3      ∞      ∞
S[i]     2      4      4      6      ∞      2      4      ∞      ∞      2      ∞
Lx[i]    1      3      3      3      2      1      3      3      2      1      1

j        0      1      2      3      4      5      6      7      8      9      10     11
y[j]     b      a      b      a      b      a      b      b      a      a      a      b
MSy[j]   (4,2)  (5,1)  (4,2)  (5,6)  (4,7)  (3,8)  (2,9)  (3,4)  (2,0)  (3,0)  (2,1)  (1,2)
SQMSy[j] 3      3      3      3      3      2      1      2      1      1      2      1
```

## 3 Longest Periodic-Preserved Common Factor

In this section, we introduce the longest periodic-preserved common factor problem and provide a linear-time solution. In the longest periodic-preserved common factor problem, we are given k ≥ 2 strings x0, x1, . . . , xk−1 of total length N and an integer 1 < k′ ≤ k, and we are asked to find a longest periodic factor common to at least k′ strings. In what follows we present two different algorithms to solve this problem. We represent the answer LPCFk′ by the length of a longest such factor, but we can trivially modify our algorithms to report an actual factor. Our first algorithm, denoted by lPcf, works as follows.

1. Compute the runs of string xj, for all 0 ≤ j < k.
2. Construct the generalised suffix tree GST(x0, x1, . . . , xk−1) of x0, x1, . . . , xk−1.
3. For each string xj and for each run [ℓ, r] with period p of xj, augment GST with the explicit node spelling xj[ℓ..r], decorate it with p, and mark it as a candidate node. This can be done as follows: for each run [ℓ, r] of xj, for all 0 ≤ j < k, find the leaf corresponding to xj[ℓ..|xj| − 1] and answer the weighted ancestor query in GST with weight r − ℓ + 1. Moreover, mark as candidates all explicit nodes spelling a prefix of length d of any run [ℓ, r] with 2p ≤ d.
4. Mark as good the nodes of the tree having at least k′ different colours on the leaves of the subtree rooted there. Let aGST be this augmented tree.
5.
Return as LPCFk0 the string depth of a candidate node in aGST which is also a good node and has maximal string depth (if any; otherwise return 0).

Theorem 2. Given k strings of total length N on alphabet Σ = {1, . . . , N^O(1)}, and an integer 1 < k0 ≤ k, algorithm lPcf returns LPCFk0 in time O(N).

Proof. Let us assume wlog that k0 = k, and let w with period p be the longest periodic factor common to all strings. By the construction of aGST (Steps 1–4), the path spelling w leads to a node nw, as w occurs in all the strings. We make the following observation.

Observation 2. Each periodic factor with period p of string x is a factor of x[i..j], where [i, j] is a run with period p.

By Observation 2, in all strings, w is included in a run having the same period. Observe that for at least one of the strings, there is a run ending with w, otherwise we could extend w obtaining a longer periodic common factor (similarly, for at least one of the strings, there is a run starting with w). Therefore nw is both a good and a candidate node. By definition, nw is at string depth at least 2p and, by construction, LPCFk0 is the string depth of a deepest such node; thus |w| will be returned by Step 5.

As for the time complexity, Step 1 [22, 4] and Step 2 [14] can be done in O(N) time. Since the total number of runs is less than N [4], Step 3 can be done in O(N) time using off-line weighted ancestor queries [5] to mark the runs as candidate nodes, and then a post-order traversal to mark their ancestor explicit nodes as candidates if their string depth is at least 2p for some run [ℓ, r] with period p. The size of the aGST is still in O(N). Step 4 can be done in O(N) time [10]. Step 5 can be done in O(N) time by a post-order traversal of aGST.

Figure 1: aGST for x = ababbabba, y = ababaab, and k = k0 = 2.

The following example provides a complete overview of the workings of our algorithm.

Example 2. Consider x = ababbabba, y = ababaab, and k = k0 = 2.
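The runs needed in Step 1 can be computed naively for examples of this size. The sketch below is a quadratic reference, not the linear-time algorithm of [22, 4], reporting each run as (start, end, period) with the end inclusive:

```python
def runs(s):
    """All runs of s as (start, end, period) triples; naive O(n^2) scan."""
    n = len(s)
    found = {}  # (start, end) -> smallest period seen
    for p in range(1, n // 2 + 1):
        k = 0
        while k + p < n:
            if s[k] != s[k + p]:
                k += 1
                continue
            j = k
            while j + p < n and s[j] == s[j + p]:
                j += 1
            # maximal fragment s[k..j+p-1] with period p
            if (j + p - 1) - k + 1 >= 2 * p:
                key = (k, j + p - 1)
                found[key] = min(found.get(key, p), p)
            k = j + 1
    return sorted((i, j, p) for (i, j), p in found.items())

assert runs("ababbabba") == [(0, 3, 2), (1, 8, 3), (3, 4, 1), (6, 7, 1)]
assert runs("ababaab") == [(0, 4, 2), (4, 5, 1)]
```

The two assertions match the runs of x and y listed next.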
The runs of x are: r0 = [0, 3], per(abab) = 2; r1 = [1, 8], per(babbabba) = 3; r2 = [3, 4], per(bb) = 1; and r3 = [6, 7], per(bb) = 1. Those of y are r4 = [0, 4], per(ababa) = 2 and r5 = [4, 5], per(aa) = 1. Fig 1 shows aGST for x, y, and k = k0 = 2. Algorithm lPcf outputs 4 = |abab|, with per(abab) = 2, as the node spelling abab is the deepest good one that is also a candidate.

We next present a second algorithm to solve this problem with the same time complexity but without the use of off-line weighted ancestor queries. The algorithm works as follows.

1. Compute the runs of string xj, for all 0 ≤ j < k.
2. Construct the generalised suffix tree GST(x0, x1, . . . , xk−1) of x0, x1, . . . , xk−1.
3. Mark as good the nodes of GST having at least k0 different colours on the leaves of the subtree rooted there.
4. Compute and store, for every leaf node, the nearest ancestor that is good.
5. For each string xj and for each run [ℓ, r] with period p of xj, check the nearest good ancestor for the leaf corresponding to xj[ℓ..|xj| − 1]. Let d be the string depth of the nearest good ancestor. Then: (a) If r − ℓ + 1 ≤ d, the entire run is also good. (b) If r − ℓ + 1 > d, check if 2p ≤ d, and if so the string for the good ancestor is periodic.
6. Return as LPCFk0 the maximal string depth found in Step 5 (if any; otherwise return 0).

Figure 2: GST for x = ababaa, y = bababb, and k = k0 = 2. Good nodes are marked red.

Let us analyse this algorithm. Let us assume wlog that k0 = k, and let w with period p be the longest periodic factor common to all strings. By the construction of GST (Steps 1–3), the path spelling w leads to a good node nw, as w occurs in all the strings. By Observation 2, in all strings, w is included in a run having the same period. Observe that for at least one of the strings, there is a run starting with w, otherwise we could extend w obtaining a longer periodic common factor.
So the algorithm should check, for each run, if there is a periodic-preserved common prefix of the run, and take the longest such prefix. LPCFk0 is the string depth of a deepest good node spelling a periodic factor; thus |w| will be returned by Step 6. As for the time complexity, Step 1 [22, 4] and Step 2 [14] can be done in O(N) time. Step 3 can be done in O(N) time [10] and Step 4 can be done in O(N) time by using a tree traversal. Since the total number of runs is less than N [4], Step 5 can be done in O(N) time. We thus arrive at Theorem 2 with a different algorithm.

The following example provides a complete overview of the workings of our algorithm.

Example 3. Consider x = ababaa, y = bababb, and k = k0 = 2. The runs of x are: r0 = [0, 4], per(ababa) = 2 and r1 = [4, 5], per(aa) = 1; those of y are r2 = [0, 4], per(babab) = 2 and r3 = [4, 5], per(bb) = 1. Fig 2 shows GST for x, y, and k = k0 = 2. Consider the run r0 = [0, 4]. The nearest good node of the leaf spelling x[0..|x| − 1] is the node spelling abab. We have that r − ℓ + 1 = 5 > d = 4, and 2p = 4 ≤ d = 4. The algorithm outputs 4 = |abab| as abab is a longest periodic-preserved common factor. Another longest periodic-preserved common factor is baba.

4 Longest Palindromic-Preserved Common Factor

In this section, we introduce the longest palindromic-preserved common factor problem and provide a linear-time solution. In the longest palindromic-preserved common factor problem, we are given two strings x and y, and we are asked to find a longest palindromic factor common to the two strings. (For related work in a dynamic setting see [17, 1].) We represent the answer LPALCF by the length of a longest factor, but we can trivially modify our algorithm to report an actual factor. Our algorithm is denoted by lPalcf. In the description below, for clarity, we consider odd-length palindromes only. (Even-length palindromes can be handled in an analogous manner.)

1.
Compute the maximal odd-length palindromes of x and the maximal odd-length palindromes of y.
2. Collect the factors x[i..i′] of x (resp. the factors y[j..j′] of y) such that i (j) is the center of an odd-length maximal palindrome of x (y) and i′ (j′) is the ending position of the odd-length maximal palindrome centered at i (j).
3. Create a lexicographically sorted list L of these strings from x and y.
4. Compute the longest common prefix of consecutive entries (strings) in L.
5. Let ℓ be the maximal length of longest common prefixes between any string from x and any string from y. For odd lengths, return LPALCF = 2ℓ − 1.

Theorem 3. Given two strings x and y on alphabet Σ = {1, . . . , (|x| + |y|)^O(1)}, algorithm lPalcf returns LPALCF in time O(|x| + |y|).

Proof. The correctness of our algorithm follows directly from the following observation.

Observation 3. Any longest palindromic-preserved common factor is a factor of a maximal palindrome of x with the same center and a factor of a maximal palindrome of y with the same center.

Step 1 can be done in O(|x| + |y|) time [18]. Step 2 can be done in O(|x| + |y|) time by going through the set of maximal palindromes computed in Step 1. Step 3 and Step 4 can be done in O(|x| + |y|) time by constructing the data structure of [9]. Step 5 can be done in O(|x| + |y|) time by going through the list of computed longest common prefixes.

The following example provides a complete overview of the workings of our algorithm.

Example 4. Consider x = ababaa and y = bababb. In Step 1 we compute all maximal palindromes of x and y. Considering odd-length palindromes gives the following factors (Step 2) from x: x[0..0] = a, x[1..2] = ba, x[2..4] = aba, x[3..4] = ba, x[4..4] = a, and x[5..5] = a. The analogous factors from y are: y[0..0] = b, y[1..2] = ab, y[2..4] = bab, y[3..4] = ab, y[4..4] = b, and y[5..5] = b.
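These center-to-end factors can be reproduced with a short scan. The sketch below is a naive quadratic check, whereas Step 1 of lPalcf uses the linear-time method of [18]:

```python
def center_to_end_factors(s):
    """For each position c, expand the maximal odd-length palindrome centered
    at c and keep its right half s[c..j] (center included)."""
    out = []
    for c in range(len(s)):
        r = 0
        while c - r - 1 >= 0 and c + r + 1 < len(s) and s[c - r - 1] == s[c + r + 1]:
            r += 1
        out.append(s[c:c + r + 1])
    return out

assert center_to_end_factors("ababaa") == ["a", "ba", "aba", "ba", "a", "a"]
assert center_to_end_factors("bababb") == ["b", "ab", "bab", "ab", "b", "b"]
```

The two assertions reproduce exactly the factor lists of Example 4.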
We sort these strings lexicographically and compute the longest common prefix information (Steps 3–4). We find that ℓ = 2: the maximal longest common prefixes are ba and ab, denoting that aba and bab are the longest palindromic-preserved common factors of odd length. In fact, algorithm lPalcf outputs 2ℓ − 1 = 3, as aba and bab are the longest palindromic-preserved common factors of any length.

5 Final Remarks

In this paper, we introduced a new family of string processing problems. The goal is to compute factors common to a set of strings preserving a specific property and having maximal length. We showed linear-time algorithms for square-free, periodic, and palindromic factors under three different settings. We anticipate that our paradigm can be extended to other string properties or settings.

Acknowledgements

We would like to acknowledge an anonymous reviewer of a previous version of this paper who suggested the second linear-time algorithm for computing the longest periodic-preserved common factor. Solon P. Pissis and Giovanna Rosone are partially supported by the Royal Society project IE 161274 "Processing uncertain sequences: combinatorics and applications". Giovanna Rosone and Nadia Pisanti are partially supported by the project Italian MIUR-SIR CMACBioSeq ("Combinatorial methods for analysis and compression of biological sequences") grant n. RBSI146R5L.

References

[1] Amihood Amir, Panagiotis Charalampopoulos, Solon P. Pissis, and Jakub Radoszewski. Longest common factor made fully dynamic. CoRR, abs/1804.08731, 2018.
[2] Lorraine A. K. Ayad, Carl Barton, Panagiotis Charalampopoulos, Costas S. Iliopoulos, and Solon P. Pissis. Longest common prefixes with k-errors and applications. In SPIRE, volume 11147 of LNCS, pages 27–41. Springer, 2018.
[3] Sang Won Bae and Inbok Lee. On finding a longest common palindromic subsequence. Theoretical Computer Science, 710:29–34, 2018. Advances in Algorithms & Combinatorics on Strings (Honoring 60th birthday for Prof.
Costas S. Iliopoulos).
[4] Hideo Bannai, Tomohiro I, Shunsuke Inenaga, Yuto Nakashima, Masayuki Takeda, and Kazuya Tsuruta. The "runs" theorem. SIAM Journal on Computing, 46(5):1501–1514, 2017.
[5] Carl Barton, Tomasz Kociumaka, Chang Liu, Solon P. Pissis, and Jakub Radoszewski. Indexing weighted sequences: Neat and efficient. CoRR, abs/1704.07625, 2017.
[6] Djamal Belazzougui and Fabio Cunial. Indexed matching statistics and shortest unique substrings. In Edleno Silva de Moura and Maxime Crochemore, editors, 21st International Symposium on String Processing and Information Retrieval (SPIRE), volume 8799 of LNCS, pages 179–190, 2014.
[7] W. I. Chang and E. L. Lawler. Sublinear approximate string matching and biological applications. Algorithmica, 12(4):327–344, 1994.
[8] Panagiotis Charalampopoulos, Maxime Crochemore, Costas S. Iliopoulos, Tomasz Kociumaka, Solon P. Pissis, Jakub Radoszewski, Wojciech Rytter, and Tomasz Waleń. Linear-time algorithm for long LCF with k mismatches. In CPM, volume 105 of LIPIcs, pages 23:1–23:16. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2018.
[9] Panagiotis Charalampopoulos, Costas S. Iliopoulos, Chang Liu, and Solon P. Pissis. Property suffix array with applications. In Michael A. Bender, Martin Farach-Colton, and Miguel A. Mosteiro, editors, LATIN 2018: Theoretical Informatics - 13th Latin American Symposium, Buenos Aires, Argentina, April 16-19, 2018, Proceedings, volume 10807 of Lecture Notes in Computer Science, pages 290–302. Springer, 2018.
[10] Lucas Chi Kwong Hui. Color set size problem with applications to string matching. In Combinatorial Pattern Matching, pages 230–243. Springer Berlin Heidelberg, 1992.
[11] Shihabur Rahman Chowdhury, Md. Mahbubul Hasan, Sumaiya Iqbal, and M. Sohel Rahman. Computing a longest common palindromic subsequence. Fundam. Inform., 129(4):329–340, 2014.
[12] Marius Dumitran, Florin Manea, and Dirk Nowotka. On prefix/suffix-square free words. In Costas S. Iliopoulos, Simon J.
Puglisi, and Emine Yilmaz, editors, 22nd International Symposium on String Processing and Information Retrieval (SPIRE), volume 9309 of LNCS, pages 54–66, 2015.
[13] Jean-Pierre Duval, Roman Kolpakov, Gregory Kucherov, Thierry Lecroq, and Arnaud Lefebvre. Linear-time computation of local periods. Theoretical Computer Science, 326(1):229–240, 2004.
[14] Martin Farach. Optimal suffix tree construction with large alphabets. In 38th Annual Symposium on Foundations of Computer Science (FOCS), pages 137–143, 1997.
[15] Martin Farach and S. Muthukrishnan. Perfect hashing for strings: Formalization and algorithms. In 7th Symposium on Combinatorial Pattern Matching (CPM), pages 130–140, 1996.
[16] Maria Federico and Nadia Pisanti. Suffix tree characterization of maximal motifs in biological sequences. Theor. Comput. Sci., 410(43):4391–4401, 2009.
[17] Mitsuru Funakoshi, Yuto Nakashima, Shunsuke Inenaga, Hideo Bannai, and Masayuki Takeda. Longest substring palindrome after edit. In Gonzalo Navarro, David Sankoff, and Binhai Zhu, editors, Annual Symposium on Combinatorial Pattern Matching (CPM 2018), volume 105 of LIPIcs, pages 12:1–12:14, Dagstuhl, Germany, 2018. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
[18] Dan Gusfield. Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge University Press, 1997.
[19] Shunsuke Inenaga and Heikki Hyyrö. A hardness result and new algorithm for the longest common palindromic subsequence problem. Information Processing Letters, 129:11–15, 2018.
[20] Takafumi Inoue, Shunsuke Inenaga, Heikki Hyyrö, Hideo Bannai, and Masayuki Takeda. Computing longest common square subsequences. In 29th Symposium on Combinatorial Pattern Matching (CPM), volume 105 of LIPIcs, pages 15:1–15:13, 2018.
[21] Tomasz Kociumaka, Tatiana A. Starikovskaya, and Hjalte Wedel Vildhøj. Sublinear space algorithms for the longest common substring problem.
In Algorithms - ESA 2014, 22nd Annual European Symposium, Wroclaw, Poland, September 8-10, 2014, Proceedings, pages 605–617, 2014.
[22] Roman Kolpakov and Gregory Kucherov. Finding maximal repetitions in a word in linear time. In 40th Annual Symposium on Foundations of Computer Science (FOCS), pages 596–604, 1999.
[23] M. Lothaire. Applied Combinatorics on Words. Encyclopedia of Mathematics and its Applications. Cambridge University Press, 2005.
[24] Pierre Peterlongo, Nadia Pisanti, Frédéric Boyer, Alair Pereira do Lago, and Marie-France Sagot. Lossless filter for multiple repetitions with Hamming distance. J. Discr. Alg., 6(3):497–509, 2008.
[25] Pierre Peterlongo, Nadia Pisanti, Frédéric Boyer, and Marie-France Sagot. Lossless filter for finding long multiple approximate repetitions using a new data structure, the bi-factor array. In 12th International Symposium on String Processing and Information Retrieval (SPIRE), pages 179–190, 2005.
[26] Tatiana A. Starikovskaya and Hjalte Wedel Vildhøj. Time-space trade-offs for the longest common substring problem. In 24th Symposium on Combinatorial Pattern Matching (CPM), pages 223–234, 2013.
[27] Sharma V. Thankachan, Chaitanya Aluru, Sriram P. Chockalingam, and Srinivas Aluru. Algorithmic framework for approximate matching under bounded edits with applications to sequence analysis. In RECOMB, volume 10812 of LNCS, pages 211–224, 2018.
[28] Sharma V. Thankachan, Alberto Apostolico, and Srinivas Aluru. A provably efficient algorithm for the k-mismatch average common substring problem. Journal of Computational Biology, 23(6):472–482, 2016.
I have been working with R for some time now, but once in a while, basic functions catch my eye that I was not aware of. For some project I wanted to transform a correlation matrix into a covariance matrix. Now, since `cor2cov` does not exist, I thought about "reversing" the `cov2cor` function (`stats:::cov2cor`). Inside the code of this function, a specific line jumped into my retina:

```
r[] <- Is * V * rep(Is, each = p)
```

What's this `[]`? Well, it stands for every element $E_{ij}$ of matrix $E$. Consider this:

```
mat <- matrix(NA, nrow = 5, ncol = 5)
```

```
> mat
     [,1] [,2] [,3] [,4] [,5]
[1,]   NA   NA   NA   NA   NA
[2,]   NA   NA   NA   NA   NA
[3,]   NA   NA   NA   NA   NA
[4,]   NA   NA   NA   NA   NA
[5,]   NA   NA   NA   NA   NA
```

With the empty bracket, we can now substitute ALL values by a new value:

```
mat[] <- 1
```

```
> mat
     [,1] [,2] [,3] [,4] [,5]
[1,]    1    1    1    1    1
[2,]    1    1    1    1    1
[3,]    1    1    1    1    1
[4,]    1    1    1    1    1
[5,]    1    1    1    1    1
```

Interestingly, this also works with lists:

```
L <- list(a = 1, b = 2, c = 3)
```

```
> L
$a
[1] 1

$b
[1] 2

$c
[1] 3
```

```
L[] <- 5
```

```
> L
$a
[1] 5

$b
[1] 5

$c
[1] 5
```

Cheers, Andrej

Filed under: R Internals Tagged: all elements, empty bracket, matrix
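For comparison outside R (not part of the original post): Python has no empty-bracket assignment, but slice assignment plays the same in-place role, replacing the contents of an existing object without rebinding the name:

```python
# Like R's `mat[] <- 1`: fill every element of an existing structure in place.
mat = [[None] * 5 for _ in range(5)]
for row in mat:
    row[:] = [1] * len(row)   # slice assignment mutates the row in place

# Like `L[] <- 5` on a named list: overwrite every value, keep the keys.
L = {"a": 1, "b": 2, "c": 3}
for key in L:
    L[key] = 5
```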
# Microcanonical ensemble confusion

I am a bit confused about the difference between macrostate and microstate in the microcanonical ensemble. So I have read that for the microcanonical ensemble, the probabilities of each microstate are equal $$p = 1/\Omega$$ where $\Omega$ is the number of microstates. For a given number of particles $N$, the number of microstates of particles distributed over discrete energy levels is given by $$\Omega = \frac{N!}{\prod_i n_i}$$ where $n_i$ is the number of particles in the $i$th energy level. Maximising $\Omega$ given the constraint that the particle number is constant, $\sum_i n_i = N$, and the energy is constant, $\sum_i \varepsilon_i n_i = E$, gives $$p_i \propto e^{-\varepsilon_i/kT}$$ This gives a probability for a particle to be in the $i$th energy level. I thought if we were in the microcanonical ensemble all probabilities are equal? Thanks

• What is equal is the probability of each microstate, that is, a given distribution of particles with total energy E. Each microstate will have particles with different energies with some distribution. Then, after averaging across all possible microstates, you get the probability $p_i$ – user126422 Apr 21 '17 at 1:32
• Firstly, you lost the factorial in the formula for $\Omega$: $n_i!$. Secondly, $p_i$ is not the probability of a microstate of the microcanonical ensemble – Aleksey Druggist Jun 27 '17 at 9:33

Here is how to derive the microcanonical and the canonical distributions. In both cases $$\Omega(\{n_i\}) = \frac{N!}{\prod_i n_i!}$$ is the multiplicity of the distribution $\{n_i\}$.

Microcanonical ensemble: all microstates of the ensemble have the same energy, $E_i = E$ for all $i$. The problem is to find the distribution that maximizes $\Omega$ under the sole constraint $$\sum_i n_i = N$$ The solution is the uniform distribution $$\frac{n_i^*}{N} = \frac{1}{\Omega} \doteq p_i$$
Canonical ensemble: we do not know the energy of microstate $i$, but we know that the average energy per microstate is the same in all distributions of the ensemble: $E_\text{tot}/N = \bar E$ for all distributions $\{n_i\}$. The problem now is to find the $\{n_i\}$ that maximizes $\Omega$ under the following two constraints: $$\sum_i n_i = N,\quad \sum_i n_i E_i = N \bar E$$ The solution now is $$\frac{n_i^*}{N} = \frac{e^{-\beta E_i}}{Q} \doteq p_i,$$ where $Q = \sum_i e^{-\beta E_i}$ is the partition function.
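The multiplicity formula above can be checked by brute force for a tiny system; the occupation numbers below are arbitrary, chosen only for illustration:

```python
from itertools import product
from math import factorial

# Check Omega = N! / prod_i(n_i!): count how many ways N = 4 labelled
# particles can realise the occupation numbers (n_0, n_1, n_2) = (2, 1, 1)
# over three energy levels.
N, levels, target = 4, 3, (2, 1, 1)

count = sum(
    1
    for assignment in product(range(levels), repeat=N)  # each particle picks a level
    if tuple(assignment.count(k) for k in range(levels)) == target
)

omega = factorial(N) // (factorial(2) * factorial(1) * factorial(1))
assert count == omega == 12
```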
# What is the peak voltage at the tips of a dipole antenna?

What is the peak voltage present at the very end tips of a half-wave dipole antenna in free space, and how might this peak voltage relate to transmitter type, transmitter power, RF frequency vs. antenna half-wave frequency mismatch, feed line, SWR, wire diameter, and so on?

I'm going to approach this a little differently, starting from roughly the same place. Here I am going to use a resonant $\lambda$/2 20m dipole driven by 100 W as the model. Let's compute the current at the feed point of a dipole at resonance; this is found with the input power (100 watts) and the feed point impedance, which for our dipole is assumed to be the theoretical 73 $\Omega$: $$I = \left(\frac{W}{R}\right)^{\frac{1}{2}} = \left(\frac{100 \mathrm W}{73 \Omega}\right)^{\frac{1}{2}} = 1.17 \: \text{amps (RMS)}$$ Therefore the driving voltage can be calculated with Ohm's Law: $$V_\text{feed} = I \cdot R = 1.17 \mathrm A_\text{RMS} \cdot 73 \Omega = 85.44 \:\mathrm V_\text{RMS}$$ (unmodulated signal)

The voltage at the end of the dipole would require us to calculate the Q and solve the following: $$V_\text{end} = \frac{Q\:V_\text{feed}}{2}$$ Trying to minimize the hand-waving, we can use some approximations from transmission line theory to give us the Q. (See Edmund Laport's Radio Antenna Engineering for a complete (and math heavy) explanation.) To do this we need the characteristic impedance of the dipole (considered as a transmission line). That is given by: $$Z_{0} = 276 \cdot \log_{10}\frac{l}{p} = 972.31 \Omega$$ Where $l$ is the total length of the dipole and $p$ is the radius of the conductor (all in the same units). I am going to ignore calculating the exact length here; we know it's approximately 5% shorter than the real wavelength to make up for velocity factor and end effects.
This next bit leans on transmission line theory and can turn into a bag of snakes; if you want to know more about where these equations come from, check the reference quoted above. $Q$ here is the ratio of the voltages of the direct and reflected waves: $$Q = \frac{1+m}{1-m}$$ and $m$ is calculated from the feed point impedance $R$ and the characteristic impedance $Z_0$: $$m = \frac{Z_0-R}{Z_0+R}$$ When I calculate $Z_0$, I am going to assume our dipole is made with 3mm wire. Now to crank through the numbers: $$m = \frac{972\Omega-73\Omega}{972\Omega+73\Omega} = .86$$ $$Q = \frac{1+.86}{1-.86} = 13.29$$ Now we can solve for $V_\text{end}$: $$V_\text{end} = \frac{(13.29 \cdot 85.44 \mathrm V)}{2} = 568 \:\mathrm V_\text{RMS}$$ Again, this is the RMS voltage, which we should convert to peak voltage: $$568 \:\mathrm V_\text{RMS} \cdot \sqrt{2} = \pm 804 \:\mathrm V_\text{peak}$$ This is all for 100 W; if we instead plug 1500 W into the above math, we come up with $$4397 \:\mathrm V_\text{RMS} \:\text{or} \: \pm 6200 \:\mathrm V_\text{peak}$$ That's a pretty hefty jolt.

So getting back to the OP's other questions, the input power has a substantial effect on the voltage. The rest of the factors are all the same as for maximizing antenna efficiency (resonance, conductor size, etc.)

EDIT: Most of the above equations come from the section on Circuital Design in the reference listed above. The book is more math heavy than typical amateur radio references, but not as bad as some of the more modern engineering texts. It's slow going, but a worthwhile read.

• An interesting consequence of this which I had not previously considered: a thicker antenna will have a lower peak voltage. (And also: wider bandwidth, lower resistive losses. Overall a win.) – Phil Frost - W8II Feb 12 '14 at 16:07
• Would the reported computation be equivalent to this (since you already computed the characteristic impedance): 972.31 Ohm * 1.17 Amps / 2 = 568 V (RMS)?
– akhmed Mar 26 '18 at 22:56

• In running through the numbers, it appears you used the wire diameter, rather than the radius as called for in the formulas cited. A wire size of 3 mm would have a radius of 0.0015 m, but the calculation appears to use a diameter of 3 mm (0.003 m), or approximately 10 AWG wire. Or did you mean to say that you used 6 mm diameter wire? At 1500 W, then, the V-pk is closer to ±6881 V-pk at 14.2 MHz, for example, versus the ±6200 calculated. Can you comment on how this method applies to the voltage(s) at the end of an elevated ground plane (tip of radiator and ends of 4x radials)? – K1VF Nov 11 '18 at 15:47

It's really hard to say, because it depends on so many things. Analyzing the antenna in free space simplifies some things, but we'd still have to consider the exact geometry of the antenna (How thick are the wires? Are they bent at all?) and the material from which they are made (any resistance will decrease the Q factor of the antenna, reducing the peak voltage).

However, if you look to analyses of end-fed dipoles, you can find that it's been determined empirically and by modeling that the impedance at the end of a real half-wave dipole is somewhere between 1800 and 5000 ohms. Knowing this, we can calculate what the voltage is for a given power into the antenna. If we are putting 100 W into the antenna, and we figure the impedance at the ends is 4000 ohms, then:

\begin{align} P &= V^2/R \\ 100\mathrm W &= V^2 / 4000\:\Omega \\ \sqrt{100\mathrm W \cdot 4000 \:\Omega} &= V \\ V &\approx 632 \mathrm V \end{align}

This is an RMS value. Knowing that our transmission is approximately sinusoidal, the peak voltage is something like:

$$632 \cdot \sqrt{2} = \pm 894 \mathrm V$$

From the above equations, we can see that power is proportional to the square of voltage:

$$P \propto V^2$$

This is the power delivered to the antenna.
Transmitter type and the match to the antenna (SWR) are relevant only to the extent that they change the power delivered to the antenna.

This is all assuming that the antenna is operated at resonance. As the frequency moves away from resonance, the peak voltage decreases. The reason why is simple: this high voltage is attainable because each cycle reinforces the previous. If you take the limiting case where the antenna is operated at DC, the voltage at the ends is equal to the voltage at the feedpoint, because there is no resonance to reinforce the voltage at the ends.

For those of you with EZNEC or some other antenna modeling software, there is a way to answer this question by modeling. Model a 1/4 WL stub in free space, taking care not to violate any geometry checks. Put a 100 W source at one end of the stub and a 10 megohm load at the other end of the stub. Adjust the stub length and user-defined wire loss to obtain the resistive feedpoint impedance of a dipole looking into the stub. Then display the load data: it will tell you the voltage at the end of a dipole. I set my 4 MHz stub for 70 ohms and got 879 volts across the 10 megohm load resistor. The additional user-defined wire loss is equivalent to the power lost from the dipole through radiation.

• Cecil! It's good to see you here! Gentlemen, welcome another BSEE! :-) – Mike Waters Nov 12 '18 at 0:09
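The two estimates above (the transmission-line Q method and the assumed end impedance) are easy to check numerically. This is a minimal sketch, assuming the same inputs as the answers: 100 W into 73 Ω, the 0.003 m value the first answer appears to have used for $p$, and a total length of 10 m (back-solved assumptions that reproduce the quoted $Z_0$ = 972.31 Ω), plus the 4000 Ω end impedance of the second method:

```python
import math

# --- Method 1: transmission-line Q estimate (per the first answer) ---
P = 100.0          # transmitter power, watts
R_feed = 73.0      # theoretical feed point impedance, ohms
l = 10.0           # total dipole length, m (assumed ~5%-short 20m dipole)
p = 0.003          # conductor "radius" the answer appears to have used, m

I = math.sqrt(P / R_feed)            # feed point current, A RMS
V_feed = I * R_feed                  # feed point voltage, V RMS
Z0 = 276 * math.log10(l / p)         # characteristic impedance of the dipole
m = (Z0 - R_feed) / (Z0 + R_feed)
Q = (1 + m) / (1 - m)
V_end = Q * V_feed / 2               # RMS voltage at the dipole tip
V_peak = V_end * math.sqrt(2)

# --- Method 2: assumed end impedance (per the second answer) ---
R_end = 4000.0                       # assumed tip impedance, ohms
V_end2 = math.sqrt(P * R_end)        # RMS
V_peak2 = V_end2 * math.sqrt(2)

print(f"Q method:     {V_end:.0f} V RMS, ±{V_peak:.0f} V peak")
print(f"End-Z method: {V_end2:.0f} V RMS, ±{V_peak2:.0f} V peak")
```

Run as written this lands within a volt or two of the answers' 568 V RMS / ±804 V and 632 V RMS / ±894 V figures (small differences come from intermediate rounding in the hand calculation).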
# 2D Moments

When computing the 2D moment about a pivot axis named $\ell$, caused by a force distributed over an area, the work expectations are:

1. State the axis of integration: the x-axis for thin vertical slices of width dx, or the y-axis for thin horizontal slices of width dy. Orient the typical slice parallel to the pivot axis. (Note: we do not present perpendicular slicing in this setting.)
2. Include a properly labeled graph (labeled axes with units and a scale) of the region. Draw and label a typical slice that matches your axis of integration.
3. Find the area, $dA$, of the slice.
4. Find the force (weight) on the slice, $dF$. Find the distance from the slice to the pivot axis. Use the force on the slice and the distance to the pivot axis to find the moment of the slice, $dM_\ell$.
5. Find the bounds for the region in terms of the variable of integration.
6. Write an integral that computes the total moment.
7. Compute the resulting integral using the FTC (or computer software).
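As an illustration of the steps above (a hypothetical example, not part of the original exercise): take the region under $y = 4 - x^2$ for $0 \le x \le 2$ with unit weight per area, pivoting about the y-axis. Vertical slices of width $dx$ are parallel to the pivot, $dA = (4 - x^2)\,dx$, the lever arm is $x$, so $dM = x \cdot (4 - x^2)\,dx$ and $M = \int_0^2 x(4-x^2)\,dx = \left[2x^2 - x^4/4\right]_0^2 = 4$. A quick numerical check:

```python
# Midpoint-rule check of M = ∫_0^2 x·(4 - x²) dx = 4 (unit weight, hypothetical region)
def moment_about_y_axis(n=100_000):
    a, b = 0.0, 2.0                 # bounds in the variable of integration (step 5)
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h       # midpoint of each thin vertical slice
        dA = (4 - x**2) * h         # area of the slice (step 3)
        dM = x * dA                 # lever arm × force on the slice (step 4)
        total += dM
    return total

print(moment_about_y_axis())  # ≈ 4.0
```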
# Euler's Formula/Real Domain/Proof 4

## Theorem

Let $\theta \in \R$ be a real number. Then:

$e^{i \theta} = \cos \theta + i \sin \theta$

## Proof

Note that the following proof, as written, only holds for real $\theta$.

Define:

$\map x \theta = e^{i \theta}$
$\map y \theta = \cos \theta + i \sin \theta$

Consider first $\theta \ge 0$. Taking Laplace transforms:

$\ds \map {\laptrans {\map x \theta} } s = \map {\laptrans {e^{i \theta} } } s = \frac 1 {s - i}$ by Laplace Transform of Exponential.

$\ds \map {\laptrans {\map y \theta} } s = \map {\laptrans {\cos \theta} } s + i \, \map {\laptrans {\sin \theta} } s$ by Linear Combination of Laplace Transforms

$\ds = \frac s {s^2 + 1} + \frac i {s^2 + 1}$ by Laplace Transform of Cosine and Laplace Transform of Sine

$\ds = \frac {s + i} {\paren {s + i} \paren {s - i} } = \frac 1 {s - i}$

So $x$ and $y$ have the same Laplace transform for $\theta \ge 0$.

Now define $\tau = -\theta$, $\sigma = -s$, and consider $\theta < 0$, so that $\tau > 0$. Taking Laplace transforms of $\map x \tau$ and $\map y \tau$:

$\ds \map {\laptrans {\map x \tau} } \sigma = \frac 1 {\sigma - i}$ from above

$\ds \map {\laptrans {\map y \tau} } \sigma = \frac 1 {\sigma - i}$ from above

So $\map x \theta$ and $\map y \theta$ have the same Laplace transforms for $\theta < 0$.

The result follows from Injectivity of Laplace Transform.

$\blacksquare$
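The identity itself is easy to sanity-check numerically (a quick spot check, not part of the proof):

```python
import cmath, math

# Check e^{iθ} = cos θ + i sin θ at a few sample angles
for theta in (0.0, 0.5, 1.0, math.pi / 2, math.pi, -2.3):
    lhs = cmath.exp(1j * theta)
    rhs = complex(math.cos(theta), math.sin(theta))
    assert abs(lhs - rhs) < 1e-12
print("Euler's formula holds at all sampled angles")
```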
# Find an Unbiased Estimator of a Function of a Parameter [duplicate]

Suppose that $Y_1,...,Y_n$ is an IID sample from a uniform distribution $U(\theta,1)$. The method of moments estimator is $\hat \theta=2\bar Y-1$. Find an unbiased estimator, call it $\hat \beta$, of the quantity: $$\frac{1-\theta}{\sqrt{3n}}$$

I noticed that if I take the expected value of this arbitrarily chosen estimator: $$\frac{1-\hat \theta}{\sqrt{3n}}$$ I end up with $$E\left( \frac{1-\hat \theta}{\sqrt{3n}} \right)=E\left( \frac{1-(2\bar Y-1)}{\sqrt{3n}} \right) =\frac{1-\theta}{\sqrt{3n}}$$

So I don't quite understand whether the goal here is for the expected value of my estimator to equal exactly $\theta$, or whether it's supposed to equal the original function of $\theta$. So this arbitrarily chosen estimator of mine: is it an unbiased estimator of the given quantity, and is it the $\hat \beta$ I seek?

## marked as duplicate by NCh, Misha Lavrov, Claude Leibovici, Mostafa Ayaz, Mark Feb 19 '18 at 8:56

Let $$\hat{\beta}=\frac{1-\hat \theta}{\sqrt{3n}}.$$ Then, as you have shown, $$E\hat{\beta}=\frac{1-\theta}{\sqrt{3n}}.$$ Hence $\hat{\beta}$ is an unbiased estimator of $\frac{1-\theta}{\sqrt{3n}}$.
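A quick Monte Carlo check of the unbiasedness (a sketch with arbitrarily chosen values $\theta = 0.3$ and $n = 20$):

```python
import random, math

def beta_hat(sample):
    """Estimator (1 - theta_hat)/sqrt(3n) with theta_hat = 2*Ybar - 1."""
    n = len(sample)
    ybar = sum(sample) / n
    theta_hat = 2 * ybar - 1
    return (1 - theta_hat) / math.sqrt(3 * n)

random.seed(42)
theta, n, reps = 0.3, 20, 100_000
estimates = [beta_hat([random.uniform(theta, 1) for _ in range(n)])
             for _ in range(reps)]

target = (1 - theta) / math.sqrt(3 * n)   # the quantity being estimated
print(f"mean of beta_hat: {sum(estimates)/reps:.5f}, target: {target:.5f}")
```

The empirical mean of $\hat\beta$ over many replications should sit on top of $(1-\theta)/\sqrt{3n}$, up to Monte Carlo noise.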
# Atomic radius of sodium

(b) Yes, because they have similar properties and the mass of magnesium (Mg) is roughly the average of the atomic masses of Be and Ca. (Compare: no for silicon, because sodium (Na), silicon and chlorine (Cl) do not have similar properties, even though the atomic mass of silicon is the average of the atomic masses of Na and Cl.)

Sodium is an alkali metal in group IA of the periodic table with atomic number 11, chemical symbol Na, an atomic weight of 22.99 (22.98976928 u or g/mol), and a density of 0.97 Mg/m³. Its melting point is 97.8 °C, and it boils at 892 °C. Its electronic configuration is (Ne)(3s¹), i.e. 1s² 2s² 2p⁶ 3s¹: of its 11 electrons, the first shell holds 2, the second 8, and the third 1. Sodium was discovered by Sir Humphry Davy in England in 1807. It is now obtained commercially by the electrolysis of absolutely dry fused sodium chloride; this method is much cheaper than electrolyzing sodium hydroxide, as was done several years ago. The sodium atom is very reactive, so it is not found free in nature; it exists as sodium ions in compounds (sodium chloride, NaCl, is table salt). There are few uses for the pure metal — liquid sodium is sometimes used to cool nuclear reactors — but its compounds are used in medicine, agriculture and photography. If sodium didn't react so vigorously, it could float on water.

There are several measures of sodium's size: the average (empirical) radius is 180 pm, the atomic or Bohr radius is 190 pm (0.190 nm), the covalent radius is 154 pm, and the van der Waals radius is 227 pm; some data tables quote an atomic radius of 2.23 Å. The (+1) ionic radius is 1.02 Å (some sources give 0.95 Å). Definitions: the atomic radius is half the distance between the nuclei of identical neighbouring atoms in the solid form of an element (an atom has no rigid spherical boundary, but may be thought of as a tiny, dense positive nucleus surrounded by a diffuse negative cloud of electrons). The ionic radius, r_ion, is the radius of a monatomic ion in an ionic crystal structure. The van der Waals radius is half the distance between two non-bonded gas atoms that are just touching each other.

Why does Na have a bigger atomic radius than Ar? Many of the trends in the periodic table are useful tools for predicting electronic properties and chemical reactivities of various species, including transition metal complexes, but you have to ignore the noble gas at the end of each period: because neon and argon don't form bonds, you can only measure their van der Waals radius — a case where the atom is pretty well "unsquashed" — while all the other atoms are measured where their atomic radius is lessened by strong attractions. The exact pattern you get depends on which measure of atomic radius you use, but the trends are still valid.

Atomic radius decreases from left to right within a period; this is caused by the increase in the number of protons and electrons across a period, with no additional shells to put the electrons further away. Within a group, atomic radius increases from top to bottom, a consequence of increased energy levels as one moves down a group: increased energy levels equate to larger orbitals and therefore more room for electrons to travel. For example, sodium in period 3 has an atomic radius of 186 picometers while chlorine in the same period has an atomic radius of 99 picometers.

Atomic radius generally increases as we move _____.
A) down a group and from right to left across a period
B) up a group and from left to right across a period
C) down a group and from left to right across a period
D) up a group and from right to left across a period
E) down a group; the period position has no effect

Why is the atomic radius of sodium much smaller than the atomic radius of potassium? Answer: because potassium has 4 shells but sodium has only 3. The atomic size of potassium is greater than that of sodium: as we go down the group, atomic radius increases due to the increase in the number of shells.

Compared to the atomic radius of a sodium atom, the atomic radius of a magnesium atom is smaller. The smaller radius is primarily a result of the magnesium atom having a larger nuclear charge: positively charged protons in the nucleus of an atom are attracted to negatively charged electrons, and one extra proton has a greater effect than one extra electron because of the Coulomb law. Hence, on bonded measures, argon's atomic size is also less than sodium's. Similarly, because sodium's atomic radius is smaller than rubidium's, sodium has a higher charge density and hence a higher electronegativity value than rubidium.

Why is the ionic radius of sodium far smaller than its atomic radius? The positive ion of sodium has its entire outer (valence) shell removed, which causes the ion to be smaller; because of the release of one electron, the sodium ion radius differs from the atomic radius. A sodium ion is attracted to negatively charged electrodes, but a sodium atom is not.

Which is larger in each pair? (a) A calcium atom or a calcium ion? (b) A chlorine atom or a chloride ion? (c) A magnesium ion or an aluminum ion? (d) A sodium atom or a silicon atom? (e) A potassium ion or a bromide ion? (For instance, the radius of a sodium atom is larger than that of a sodium cation, and the radius of a chlorine atom is smaller than that of a sodium atom.)

Which of the following has the largest atomic radius? (Calculated radii for comparison: He 31 pm, Ne 38 pm, F 42 pm, O 48 pm, H 53 pm, Cd 161 pm, Ag 165 pm, Cr 166 pm, Li 167 pm, Pr 247 pm, Ba 253 pm, Rb 265 pm, Cs 298 pm.)

On the PERIODIC TABLE tab, select Na (sodium). Click Check, and record the electron configuration and atomic radius. Sodium electron configuration: 1s² 2s² 2p⁶ 3s¹; atomic radius: 190 pm. Click Next element, and then add an electron to the magnesium atom. Magnesium electron configuration: 1s² 2s² 2p⁶ 3s²; atomic radius: 145 pm. How do you think the atomic radii will change as electrons are added to a shell? How do the radii of atoms change across a period of the periodic table?

Q: The atomic radius of sodium is 1.86 Å. Consider a unit cell of sodium, which is bcc. Calculate the Fermi energy of sodium at absolute zero. Sodium metal (atomic weight 22.99 $\mathrm{g}/\mathrm{mol}$) adopts a body-centered cubic structure with a density of 0.97 $\mathrm{g}/\mathrm{cm}^{3}$. (A related exercise: calculate the radius of an iridium atom, given that Ir has an FCC crystal structure, a density of $\pu{22.4 g/cm^3}$ and atomic weight of $\pu{192.2 g/mol}$.)
(a) Use this information and Avogadro's number $\left(N_{\mathrm{A}}=6.022 \times 10^{23} / \mathrm{mol}\right)$ to estimate the atomic radius of sodium.

Group 1 of the periodic table contains six elements, namely lithium (Li), sodium (Na), potassium (K), rubidium (Rb), cesium (Cs) and francium (Fr). These metals are called alkali metals because they form alkalies (i.e. strong bases capable of neutralizing acids) when they react with water. Sodium is the fourth most abundant element on earth, comprising about 2.6% of the earth's crust; it is the most abundant of the alkali group of metals.

The radius of a chlorine atom is smaller than that of a sodium atom because chlorine has a larger number of protons and a higher nuclear charge, with no additional shells to put the electrons further away. In the case of sodium, the (+1) ionic radius is 1.02 Å.
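Both numerical exercises above can be worked in a few lines. This is a sketch under standard assumptions: a bcc cell contains 2 atoms with nearest-neighbour contact along the body diagonal, so $r = \sqrt{3}\,a/4$; the free-electron Fermi energy $E_F = \frac{\hbar^2}{2m_e}(3\pi^2 n)^{2/3}$ assumes one conduction electron per atom:

```python
import math

N_A = 6.022e23            # Avogadro's number, 1/mol
M = 22.99                 # molar mass of sodium, g/mol
rho = 0.97                # density, g/cm^3

# --- Atomic radius from the bcc structure ---
# A bcc cell holds 2 atoms; atoms touch along the body diagonal: 4r = sqrt(3)*a
vol_cell = 2 * M / (rho * N_A)        # cm^3 per unit cell
a = vol_cell ** (1 / 3)               # lattice constant, cm
r_angstrom = math.sqrt(3) / 4 * a * 1e8
print(f"estimated atomic radius: {r_angstrom:.2f} Å")   # ≈ 1.86 Å, matching the quoted value

# --- Free-electron Fermi energy at absolute zero ---
hbar = 1.0546e-34         # reduced Planck constant, J·s
m_e = 9.109e-31           # electron mass, kg
eV = 1.602e-19            # J per eV
n = rho / M * N_A * 1e6   # conduction electrons per m^3 (1 per atom)
E_F = hbar**2 / (2 * m_e) * (3 * math.pi**2 * n) ** (2 / 3)
print(f"Fermi energy: {E_F / eV:.2f} eV")               # ≈ 3.15 eV
```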
# Expected time to $k$-th occurrence

I was trying to solve the following exercise:

Suppose that the sequence of independent events $\{A_i\}$ satisfies $$\phi(n) = \sum_{i=1}^{n} P(A_i) \rightarrow \infty \text{ as } n \rightarrow \infty$$ and let $\tau_k :=\min\{n \mid \sum_{i=1}^{n} 1(A_i) = k\}$. By applying Doob's stopping time theorem to an appropriate martingale, prove the more quantitative result that $$E[\phi(\tau_k)] = k \text{ for all } k \geq 1$$

Here, $1(\cdot)$ is the indicator function. By Doob's stopping theorem, I think the book is referring to the following theorem (which is just named "stopping time theorem"): if $(M_n)$ is a martingale with respect to $(\mathcal{F}_n)$, then the stopped process $(M_{\tau \wedge n})$ is also a martingale with respect to $(\mathcal{F}_n)$.

First of all, let me say that I think my proof is wrong: I don't use the stopping time theorem or the fact that $\phi(n)$ diverges as $n \rightarrow \infty$, and it just looks too easy to be true. In any case, I'd welcome hints and constructive criticism.

My attempt at a proof: Since $P(A) = E[1(A)]$, we can write $$\phi(\tau_k) = \sum_{i=1}^{\tau_k} P(A_i) = \sum_{i=1}^{\tau_k} E[1(A_i)]$$ Now, $\tau_k$ is the smallest $n$ such that from the sequence $\{A_1, A_2, \ldots, A_n\}$, exactly $k$ of them occur. So $$\sum_{i=1}^{\tau_k} E[1(A_i)] = \sum_{j=1}^{k} E[1(A_{i_j})]$$ for indices $i_j$. Since, by definition of $\tau_k$, we know that the events $A_{i_j}$ occur for $j = 1, \ldots, k$, we have that $$\phi(\tau_k) = \underbrace{1 + 1 + \ldots + 1}_{k} = k$$ and taking the expectation of both sides gives $$E[\phi(\tau_k)] = E[k] = k$$

- $\phi (\tau _k )$ is a (random) sum of probabilities, not $1$'s. – Shai Covo Mar 15 '11 at 22:22
- @Shai Covo: but isn't $E[1(A_{i_j})] = 1$ for all $j=1,\ldots,k$? – Stijn Mar 15 '11 at 22:29
- Note that if $P(A_i) = p$, then $\phi (\tau _k ) = p\tau _k$, which is not integer-valued in general.
– Shai Covo Mar 15 '11 at 22:53

Define a martingale $(X_n : n \geq 1)$ by $X_n = \sum\nolimits_{i = 1}^n {[{\mathbf 1}(A_i ) - P(A_i )]}$. Consider the optional stopping theorem, as formulated in Wikipedia. You want $\tau_k$ to be a stopping time with respect to $X_1, X_2, \ldots$, that is, you want to show that the event $\{ \tau_k = n \}$ is completely determined by the total information known up to time $n$, $\{X_1, \ldots, X_n \}$. Next, you want ${\rm E}(\tau_k ) < \infty$. (The condition ${\rm E}|X_{i + 1} - X_i | \le c$ is clearly satisfied, with $c=1$.) However,... – Shai Covo Mar 16 '11 at 1:30

However, while the condition $\phi(n) \to \infty$, that is $\sum\nolimits_{i = 1}^\infty {P(A_i )} = \infty$, implies that with probability $1$ infinitely many of the events $A_i$ occur, and hence that $\tau_k$ is almost surely finite, ${\rm E}(\tau_k )$ might be infinite, and hence the standard optional stopping theorem cannot be applied here. For example, let $P(A_i)=1/(i+1)$, and note that ${\rm E}(\tau_1 ) = \sum\nolimits_{i = 1}^\infty {1/(i + 1)} = \infty$. – Shai Covo Mar 16 '11 at 1:30

2. You implicitly use $\phi(n) \rightarrow \infty$, since if it had some finite upper bound $L$ then for $k>L$ you would not be able to have $\phi(\tau_k) = k$.

3. When you say $\sum_{i=1}^{\tau_k} E[1(A_i)] = \sum_{j=1}^{k} E[1(A_{i_j})]$, it is unclear whether you are using $A_i$ to mean unknown future events at the start or known past events at the stopping time. It does not matter which, because of the optional stopping theorem as applied to martingales (meeting some additional conditions which are satisfied in this case), namely that the expected value of such a martingale at a stopping time is equal to its initial value. I would expect you to find something like $E[X_{\tau \wedge n}]=E[X_0]$ or $E[X_{\tau}]=E[X_0]$ somewhere in your book.
If you have not used it then all you have proved is that at the stopping time $\tau_k$, $k$ events have occurred (so each with retrospective probability 1) and $\tau_k - k$ have not (each with retrospective probability 0), so the expectation of the sum of their retrospective probabilities is $k$. That should not be a surprise. The optional stopping theorem lets you take this back to the start and say that the expectation of the sum of their prospective probabilities is the same $k$, even though you do not know which will occur or even when the stopping time will be. –  Henry Mar 15 '11 at 22:51
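The identity $E[\phi(\tau_k)] = k$ can be sanity-checked by simulation. Below is a minimal sketch (my own, not from the book) using constant probabilities $P(A_i) = p$, so that $\phi(n) = pn$ and $\tau_k$ has a negative binomial distribution with mean $k/p$:

```python
import random

def sample_phi_tau(k, p, rng):
    """Simulate independent events A_i with P(A_i) = p and return
    phi(tau_k) = p * tau_k, where tau_k is the index of the k-th event."""
    n, hits = 0, 0
    while hits < k:
        n += 1
        if rng.random() < p:
            hits += 1
    return p * n  # phi(n) = sum of P(A_i) for i <= n, which is p*n here

rng = random.Random(0)
k, p, trials = 3, 0.3, 20000
est = sum(sample_phi_tau(k, p, rng) for _ in range(trials)) / trials
print(est)  # Monte Carlo estimate of E[phi(tau_k)], close to k = 3
```

Note that this only checks a case where ${\rm E}(\tau_k)$ is finite; as the comments point out, the interesting part of the exercise is that the identity persists even when ${\rm E}(\tau_k) = \infty$.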
# For the provided reinforcement, has the offset dimension already been taken into account, and what is the anchorage length at the longitudinal reinforcement?

#### Answer

In RF-CONCRETE Members (RFEM) or CONCRETE (RSTAB), the offset dimension is taken into account by default when determining the reinforcement proposal. You can see the difference between the required reinforcement with and without the offset dimension in the results navigator:

- A-s,-z (above)
- A-s,+z (below)
- A-s,-z (above), req.
- A-s,+z (below), req.

The offset dimension is included in A-s,-z (above), req. and A-s,+z (below), req.

For the reinforcement proposal in RF-CONCRETE Members or CONCRETE, the anchorage length for a "straight bar" is considered by default. The setting can be found in input window 1.6 (see Figure 2). The anchorage length is determined from A-s,-z (above), req. and A-s,+z (below), req., so the offset dimension is included here as well.

(A Chinese-language hotline is available on request.)
## Objective

1. Consider all finite strings formed from the letters A and B only, arranged in dictionary order, e.g. ABAA. Decide which of the following hold:
2. For any string $w$ and another string $y < w$, there need not exist a string $x$ with $y < x < w$.
3. There is an infinite set of strings $a_1, a_2, \ldots$ such that $a_i < a_{i+1}$ for all $i$.
4. There are fewer than 50 strings less than AABBABBA.

- Ten people are seated in a circle. One person contributes five hundred rupees, and every person contributes the average of the money contributed by his two neighbors.
  1. What is the sum contributed by all ten?
     1. > 5000
     2. < 5000
     3. = 5000
     4. Cannot say.
  2. What is the maximum contribution by an individual?
     1. 500
     2. = 500
     3. none
- There are 4 bins and 4 balls. Let $E_i$ be the event that the first $i$ balls fall into distinct bins. Find
  1. $P(E_4)$
  2. $P(E_4 \mid E_3)$
  3. $P(E_4 \mid E_2)$
  4. $P(E_3 \mid E_4)$
- Let $f(x) = \sin^{-1} (\sin (\pi x))$. Find
  1. $f(2.7)$
  2. $f'(2.7)$
  3. $\int_0^{2.5} f(x)\,dx$
  4. the values of $x$ for which $f'(x)$ does not exist
- In some country, number plates are formed from 2 digits and 3 vowels. A plate is called confusing if it contains both the digit 0 and the vowel O.
  1. How many such number plates exist?
  2. How many are not confusing?
- A number $n$ is called magical if, whenever $a$ and $b$ are not coprime to $n$, $a+b$ is also not coprime to $n$. For example, 2 is magical, as all even numbers are not coprime to 2. Determine whether the following numbers are magical:
  1. 129
  2. 128
  3. 127
  4. 100
- a) In the expansion of $(1+\sqrt{2})^{10} = \sum_{i=0}^{10} \binom{10}{i} (\sqrt{2})^i$, the term with maximum value is
  b) If $(1+\sqrt{2})^n = p_n + q_n \sqrt{2}$, where $p_n$ and $q_n$ are integers, then $\lim_{n \to \infty} \frac{p_n}{q_n}$ is

## Subjective

1. In a circle, let AB be the diameter and X an external point. Using only a straightedge, construct a perpendicular from X to AB.
   1. If X is inside the circle, how can this be done? Discussion
2. Let $a$ be a positive integer from the set $\{2, 3, 4, \ldots, 9999\}$.
Show that there are exactly two positive integers $a$ in that set such that $10000$ divides $a(a-1)$.
   1. Put $n^2 - 1$ in place of 9999. How many positive integers $a$ exist such that $n^2$ divides $a(a-1)$? Discussion
3. $P(x)$ is a polynomial. Show that $\displaystyle \lim_{t \to \infty} \frac{P(t)}{e^t}$ exists. Also show that the limit does not depend on the polynomial.
4. We define the function $\displaystyle f(x) = \frac{e^{-1/x}}{x}$ for $x \neq 0$ and $f(0) = 0$. Show that the function is continuous and differentiable, and find the limit at $x = 0$.
5. Let $p, q, r$ be any real numbers such that $p^2 + q^2 + r^2 = 1$.
   1. Show that $3(p^2 q + p^2 r) + 2(r^3 + q^3) \le 2$.
   2. Suppose $f(p,q,r) = 3(p^2 q + p^2 r) + 2(r^3 + q^3)$. At what values $(p, q, r)$ does $f(p,q,r)$ attain its maximum and minimum?
6. Let $g(n)$ be the GCD of $2n+9$ and $6n^2 + 11n - 2$; find the greatest value of $g(n)$.
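For problem 6, note that $4(6n^2 + 11n - 2) - (2n+9)(12n - 32) = 280$, so $g(n)$ divides $280$; since $2n+9$ is odd, $g(n)$ in fact divides $35$. A quick brute-force check (a sketch, not a proof) confirms that the bound $35$ is attained:

```python
from math import gcd

def g(n):
    """gcd of 2n+9 and 6n^2+11n-2, as in problem 6."""
    return gcd(2 * n + 9, 6 * n * n + 11 * n - 2)

best = max(g(n) for n in range(1, 2000))
print(best, g(13))  # 35 35  (the maximum, attained e.g. at n = 13)
```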
# Fisher Assumptions

Prof Randi Garcia April 16, 2018

• Please tell me about your experience designing and executing your group project. If you could do it again, what would you do differently? What would you say you've learned so far from designing this study? What advice do you have for students taking this class and designing studies with the Botanic Garden in the future?

### Announcements

• Last week for data collection!
• HW 9 due on Wed.
• Rachel Shutt's talk on Thursday, at 5pm, Stoddard G2.
• Attend and make a comment on the #rachelshutt Slack channel for extra credit.

### Agenda

• Compound within-block factors (crossing)
• Multiple comparisons and contrasts
• Fisher assumptions in R

### Compound within Block Factors

In an experiment, researchers wanted to compare how easy it is to remember four different kinds of words: 1) concrete, frequent: fork, brother, radio, …; 2) concrete, infrequent: blimp, warthog, fedora, …; 3) abstract, frequent: truth, anger, foolishness, …; and 4) abstract, infrequent: slot, vastness, apostasy, …. Ten students in a psychology lab served as subjects. During each of the 4 time slots, subjects heard a list of words of one of the four kinds, and were then tested for recall.

### Compound within Block Factors

There are two possible models for chance error in models with compound within-block factors.

### Compound within Block Factors

1. The additive model - assumes that chance error is the same for all within-block factors, thus we could pool residual terms.
2. The non-additive model - does not make this (often incorrect) assumption, but tests using this model are lower in power.

How can we decide?

• Think about whether or not you would expect block X treatment interaction effects. If you would, then the additive model will be wrong.
### Rule for Compound within Block F-ratios (additive)

$F = \frac{{MS}_{Factor}}{{MS}_{Error}}$

### Rule for Compound within Block F-ratios (non-additive)

$F = \frac{{MS}_{Factor}}{{MS}_{Blocks\times Factor}}$

### Rule for Compound between Block F-ratios

$F = \frac{{MS}_{Factor}}{{MS}_{Blocks}}$

### Example

Each of eight patients, while in surgery, had oxygen pressure readings taken in two of their veins, hepatic and portal, under two conditions, control and with the femoral artery clamped. Units of measurement of the response variable are mm Hg (millimeters of mercury).

### Reminder: Worm Example

Worms that live at the mouth of a river must deal with varying concentrations of salt. Osmoregulating worms are able to maintain a relatively constant concentration of salt in the body. An experiment tested the effects of mixtures of salt water on two species of worms: Nereis virens (N) and Golfingia gouldii (G). Eighteen worms of each species were weighed, then randomly assigned in equal numbers to one of three conditions: six worms of each kind were placed in 100% sea water, 67% sea water, or 33% sea water. The worms were then weighed after 30, 60, and 90 minutes, then placed in 100% sea water and weighed one last time 30 minutes later. The response was body weight as a percentage of initial body weight.

### Comparisons

• When we have more than two levels of a factor of interest, we might want to compare specific groups to see which ones differ from each other.
• We can do a set of pairwise comparisons, or custom comparisons of more complex ideas.

### Example

• For the walking babies example (pg. 150), below are (rounded) average times to walk (months) for the four groups. Compute the estimates for the following set of three comparisons: i) exercise vs. no exercise, ii) special exercise vs. exercise control, and iii) weekly report vs. final report.

1. Special exercise: 10.1
2. No exercise, weekly report: 11.6
3. Exercise control: 11.4
4.
No exercise, final report: 12.4

• Draw a diagram with arrows depicting the top-down approach taken with this set of comparisons.

### Confidence Intervals for Comparisons (t-test)

$comparison \pm t^* \times SE$

$SE = \sqrt{{MS}_{Error}}\times\sqrt{\frac{1}{{n}_{1}}+\frac{1}{{n}_{2}}}$

When we do multiple significance tests, our effective type I error rate is inflated. Most statisticians agree that we should adjust our type I error rate to account for our multiple tests, and control the experiment-wise error rate. There are four methods discussed in the chapter:

1. Fisher Least Significant Difference (LSD)
2. Tukey Honest Significant Difference (HSD)
3. Scheffé test
4. The Bonferroni correction

### CWIC Rule

For comparisons in designs with compound within-block factors. Constant Within-blocks Interactions with the Comparison Factor (CWIC) Rule:

• Pool error MS for
1. The comparison factor
2. All interactions of that factor with any factor that is both constant and within blocks.

### Six Fisher Assumptions

• C. Constant effects
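For the walking-babies comparisons above, the three contrast estimates can be computed directly from the group means. The slides use R, but the arithmetic is sketched here in Python; the group labels are mine:

```python
means = {
    "special_exercise": 10.1,     # group 1: exercise, special program
    "no_exercise_weekly": 11.6,   # group 2: no exercise, weekly report
    "exercise_control": 11.4,     # group 3: exercise control
    "no_exercise_final": 12.4,    # group 4: no exercise, final report only
}

# i) exercise vs. no exercise (average of groups 1,3 minus average of 2,4)
c1 = (means["special_exercise"] + means["exercise_control"]) / 2 \
     - (means["no_exercise_weekly"] + means["no_exercise_final"]) / 2
# ii) special exercise vs. exercise control
c2 = means["special_exercise"] - means["exercise_control"]
# iii) weekly report vs. final report (within the no-exercise groups)
c3 = means["no_exercise_weekly"] - means["no_exercise_final"]
print(round(c1, 2), round(c2, 2), round(c3, 2))  # -1.25 -1.3 -0.8
```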
Multi-Objective Optimal Design of a Single Phase AC Solenoid Actuator Used for Maximum Holding Force and Minimum Eddy Current Loss

Title & Authors

Multi-Objective Optimal Design of a Single Phase AC Solenoid Actuator Used for Maximum Holding Force and Minimum Eddy Current Loss
Yoon, Hee-Sung; Eum, Young-Hwan; Zhang, Yanli; Koh, Chang-Seop;

Abstract

A new Pareto-optimal design algorithm, requiring the least computational work, is proposed for a single phase AC solenoid actuator with multiple design objectives: maximizing holding force and minimizing eddy current loss simultaneously. In the algorithm, the design space is successively reduced by a suitable factor as iteration repeats, centered at the pseudo-optimal point. At each iteration, the objective functions are approximated by a simple second-order response surface using the CCD sampling points generated within the reduced design space, and Pareto-optimal solutions are obtained by applying a $(1+\lambda)$ evolution strategy with fitness values given by Pareto strength.

Keywords

Adaptive Response Surface Method; AC Solenoid Actuator; Multi-objective Optimal Design; Pareto-optimal Solution;

Language

English

Cited by

1. Multi-Objective Optimal Design of a NEMA Design D Three-phase Induction Machine Utilizing Gaussian-MOPSO Algorithm, Journal of Electrical Engineering and Technology, 2014, vol. 9, no. 1, pp. 184-189

References

1. S. Yoon, J. Her, Y. Chun, and D. Hyun, "Shape optimization of solenoid actuator using the finite element method and numerical optimization technique," IEEE Trans. on Magn., vol. 33, no. 5, pp. 4140-4142, 1997
2. H. C. Roters, Electromagnetic Devices, John Wiley & Sons, Inc., 1941, pp. 463-480
3. T.
Nakata, et al., Design and Application of AC/DC Electromagnets Using FEM, Morikita Publishing Co., ISBN 4-627-74150-2, 1991, pp. 104-135
4. Z. B. He, "Optimal design of the magnetic field of a high-speed response solenoid valve," Journal of Materials Processing Technology, Elsevier, vol. 129, pp. 555-558, 2002
5. R. H. Myers and D. C. Montgomery, Response Surface Methodology, John Wiley & Sons, Inc., 2002, pp. 85-134
6. I. F. Sbalzarini, S. Muller, and P. Koumoutsakos, "Multi-objective optimization using evolutionary algorithms," Center for Turbulence Research, Proceedings of the Summer Program, 2000, pp. 63-73
7. K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, Inc., 2004, pp. 261-269
8. D. Han and A. Chatterjee, "Adaptive response surface modeling-based method for analog circuit sizing," Proceedings of IEEE International SOC Conference, 2004, pp. 109-112
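The successive design-space reduction idea described in the abstract can be illustrated with a deliberately simplified sketch: one design variable, one objective, a three-point quadratic response surface instead of CCD sampling, and no Pareto ranking. All names and constants here are illustrative, not the paper's actual algorithm:

```python
def quad_vertex(x_mid, h, f_lo, f_mid, f_hi):
    """Vertex of the parabola through (x_mid-h, f_lo), (x_mid, f_mid),
    (x_mid+h, f_hi); assumes the three points are not collinear."""
    return x_mid + h * (f_lo - f_hi) / (2.0 * (f_lo - 2.0 * f_mid + f_hi))

def adaptive_rsm(f, lo, hi, iters=12, shrink=0.5):
    """Fit a local response surface, then recenter and shrink the
    design space around its minimizer (the pseudo-optimal point)."""
    for _ in range(iters):
        mid, h = (lo + hi) / 2.0, (hi - lo) / 2.0
        x_star = quad_vertex(mid, h, f(lo), f(mid), f(hi))
        x_star = min(max(x_star, lo), hi)    # keep inside the current space
        half = shrink * (hi - lo) / 2.0      # reduce the space by `shrink`
        lo, hi = max(lo, x_star - half), min(hi, x_star + half)
    return (lo + hi) / 2.0

x_opt = adaptive_rsm(lambda x: (x - 0.37) ** 2 + 1.0, 0.0, 1.0)
print(x_opt)  # converges to the true minimizer 0.37
```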
# example of right inverse

There are few concrete examples of such semigroups, however; most are completely simple semigroups. The operators of linear dynamics often possess inverses and then form groups; for trigonometric functions, the corresponding inverse operations are the inverse trigonometric functions. A function $h$ satisfying $f \circ h = \operatorname{id}_Y$ is a right inverse of $f$; similarly, $g$ with $g \circ f = \operatorname{id}_X$ is a left inverse. In a monoid (an associative unital magma), every element has at most one inverse (as defined in this section). An intuitive description of this fact is that every pair of mutually inverse elements produces a local left identity and, respectively, a local right identity. An inverse semigroup may have an absorbing element 0 because 000 = 0, whereas a group may not. Outside semigroup theory, a unique inverse as defined in this section is sometimes called a quasi-inverse. I used to have a hard time remembering which were left and which were right cosets. Another example uses goniometric functions, which in fact appear a lot.
An example of the use of inverse trigonometric functions in the real world is carpentry. If we want to calculate the angle in a right triangle where we know the lengths of the opposite and adjacent sides, say 5 and 6 respectively, then we know that the tangent of the angle is 5/6, and the inverse tangent recovers the angle itself.

Math 323-4 Examples of Right and Left Inverses 2010 (Problem 2(d) corrected 9:45 PM Nov 12.)

The definition in the previous section generalizes the notion of inverse in a group relative to the notion of identity. Let $M$ be a module (over some ring) such that $M$ is isomorphic to $M \oplus M$, for example an infinite-dimensional vector space over a field, and let $R$ be the ring of endomorphisms of $M$.

Right inverse ⇔ surjective. Theorem: a function is surjective (onto) iff it has a right inverse. Proof (⇐): Assume $f: A \to B$ has a right inverse $h$. For any $b \in B$, we can apply $h$ to it to get $h(b)$; since $h$ is a right inverse, $f(h(b)) = b$. Therefore every element of $B$ has a preimage in $A$, and hence $f$ is surjective.

However, the Moore–Penrose inverse exists for all matrices, and coincides with the left or right (or true) inverse when it exists. To obtain ${\cal L}^{-1}(F)$, we find the partial fraction expansion of $F$, obtain inverse transforms of the individual terms in the expansion from a table of Laplace transforms, and use the linearity property of the inverse transform.
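The carpentry-style calculation above (recovering the angle whose tangent is 5/6) is exactly what the inverse tangent does:

```python
import math

opposite, adjacent = 5.0, 6.0
angle = math.atan(opposite / adjacent)   # inverse tangent, in radians
print(math.degrees(angle))               # about 39.8 degrees
assert abs(math.tan(angle) - opposite / adjacent) < 1e-12  # tan undoes atan
```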
Let us start with an example. Here we have the function $f(x) = 2x + 3$, written as a flow diagram. The word "inverse" is derived from Latin *inversus*, meaning "turned upside down" or "overturned".

Another easy-to-prove fact: if $y$ is an inverse of $x$, then $e = xy$ and $f = yx$ are idempotents, that is, $ee = e$ and $ff = f$. Thus, every pair of (mutually) inverse elements gives rise to two idempotents, with $ex = xf = x$ and $ye = fy = y$; $e$ acts as a left identity on $x$, while $f$ acts as a right identity, and the left/right roles are reversed for $y$.

Lately I remembered an exercise from an algebra class, from Jacobson's book: prove that if an element has more than one right inverse then it has infinitely many. Jacobson attributes this exercise to Kaplansky.

To avoid confusion between negative exponents and inverse functions, it is sometimes safer to write arcsin instead of $\sin^{-1}$ when talking about the inverse sine function.

@Pete: what I always have the most trouble with is remembering which way round the subscripts for matrix entries go :-) But I guess I've been doing category theory long enough now that function-composition conventions are burned into my brain…
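The idempotent fact above can be checked concretely: take a pair of mutually inverse, non-square matrices (so that $ABA = A$ and $BAB = B$) and verify that $e = AB$ and $f = BA$ are idempotent. The matrices below are my own illustrative choice:

```python
def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 0, 0],
     [0, 1, 0]]   # 2x3: drops the last coordinate
B = [[1, 0],
     [0, 1],
     [0, 0]]      # 3x2: a right inverse of A

e = matmul(A, B)  # e = AB = I_2, a (local) left identity
f = matmul(B, A)  # f = BA, idempotent but not I_3
assert matmul(matmul(A, B), A) == A   # ABA = A
assert matmul(matmul(B, A), B) == B   # BAB = B
assert matmul(f, f) == f              # f is idempotent
print(e)  # [[1, 0], [0, 1]]
print(f)  # [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
```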
An inverse permutation is a permutation in which each number and the number of the place which it occupies are exchanged. This is more a permutation cipher than a transposition one.

A class of semigroups important in semigroup theory are the completely regular semigroups; these are I-semigroups in which one additionally has $aa^{\circ} = a^{\circ}a$; in other words, every element has a commuting pseudoinverse $a^{\circ}$. A natural generalization of the inverse semigroup is to define an (arbitrary) unary operation $^{\circ}$ such that $(a^{\circ})^{\circ} = a$ for all $a$ in $S$; this endows $S$ with a type ⟨2,1⟩ algebra. An element that is both a left and a right inverse of $x$ is called a two-sided inverse, or simply an inverse, of $x$. A unital magma in which all elements are invertible is called a loop.

A matrix with full row rank has a right inverse: check that $A$ times $A^{T}(AA^{T})^{-1}$ is $I$. An invertible matrix ($r = m = n$) has only the zero vector in its nullspace and left nullspace.

@Peter: Ironically, I think your example is essentially the same as mine but with the other convention on the order of the product $x*y$: for me, since these are functions, I read them as "first do $y$, then do $x$", but your convention makes just as much sense.

Consider the space $\mathbb{Z}^\mathbb{N}$ of integer sequences $(n_0, n_1, \ldots)$, and take $R$ to be its ring of endomorphisms.
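The inverse permutation described above (each value and the place it occupies exchanged) is easy to compute; 0-based positions are assumed here:

```python
def inverse_permutation(p):
    """Return q with q[p[i]] = i for all i (0-based positions)."""
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return q

p = [2, 0, 3, 1]
q = inverse_permutation(p)
print(q)                  # [1, 3, 0, 2]
print([q[v] for v in p])  # [0, 1, 2, 3] -- q undoes p
```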
If $AN = I_n$, then $N$ is called a right inverse of $A$. If the determinant of a square matrix is zero, it is impossible for it to have a one-sided inverse; for a square matrix, therefore, a left inverse or a right inverse implies the existence of the other one.

In a semigroup $S$, an element $x$ is called (von Neumann) regular if there exists some element $z$ in $S$ such that $xzx = x$; $z$ is sometimes called a pseudoinverse. In abstract algebra, the idea of an inverse element generalises the concepts of negation (sign reversal, in relation to addition) and reciprocation (in relation to multiplication). A left-invertible element is left-cancellative, and analogously for right and two-sided.

For example, find the inverse of $f(x) = 3x + 2$. For instance, the map given by $\vec{v} \mapsto 2 \cdot \vec{v}$ has the two-sided inverse $\vec{v} \mapsto (1/2) \cdot \vec{v}$.

Then the "left shift" operator $(n_0, n_1, \ldots) \mapsto (n_1, n_2, \ldots)$ has plenty of right inverses: a right shift, with anything you want dropped in as the first coordinate, gives a right inverse.
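The left-shift example can be made concrete, with finite tuples standing in for infinite sequences; every choice of first coordinate gives a distinct right inverse of the left shift, but none of them is a left inverse:

```python
def left_shift(seq):
    """Drop the first coordinate: (n0, n1, ...) -> (n1, n2, ...)."""
    return seq[1:]

def right_shift(c):
    """Right shift inserting c up front; one right inverse per choice of c."""
    return lambda seq: (c,) + seq

s = (5, 8, 13, 21)
for c in (0, 7, 99):
    ins = right_shift(c)
    assert left_shift(ins(s)) == s   # left_shift after ins is the identity
    print(ins(left_shift(s)))        # (c, 8, 13, 21): not s, so not a left inverse
```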
The cost to heat a house will depend on the average daily temperature, and in turn, the average daily temperature depends on the particular day of the year. If all elements are regular, then the semigroup (or monoid) is called regular, and every element has at least one inverse. In a monoid, the set of (left and right) invertible elements is a group, called the group of units of $S$. In a (monotone) Galois connection, the lower and upper adjoints $L$ and $G$ satisfy $LGL = L$ and $GLG = G$, and one uniquely determines the other.
An element $y$ is called (simply) an inverse of $x$ if $xyx = x$ and $y = yxy$. The unary operation must somehow interact with the semigroup operation; a semigroup endowed with such an operation is called a U-semigroup. Two classes of U-semigroups have been studied: I-semigroups, in which the interaction axiom is $aa^{\circ}a = a$, and *-semigroups, in which the unary operation is an involution, $(ab)^{*} = b^{*}a^{*}$. A group is both an I-semigroup and a *-semigroup. A loop whose binary operation satisfies the associative law is a group. Finally, an inverse semigroup with only one idempotent is a group. The full transformation monoid is regular, and the monoid of injective partial transformations is the prototypical inverse semigroup (see Scheiblich, "Regular * Semigroups").

More generally, a square matrix over a commutative ring $R$ is invertible if and only if its determinant is invertible in $R$. No rank-deficient matrix has any (even one-sided) inverse. In contrast, zero has no multiplicative inverse, but it has a unique quasi-inverse; bijections have two-sided inverses, but any function has a quasi-inverse. The inverse of an even permutation is an even permutation. Under the more general definition of quasi-inverses, inverses need not be unique (or exist) in an arbitrary semigroup or monoid; an element can even have several left inverses and several right inverses.

A function accepts values, performs particular operations on these values, and generates an output. Granted, inverse functions are studied even before a typical calculus course, but their roles and utilities in the development of calculus only start to become increasingly apparent after the discovery of a certain formula which relates the derivative of an inverse function to that of the original function. To find the inverse Laplace transform of $f(s)$, one decomposes it into terms using partial fraction expansion. Carpentry uses inverse trigonometry when cutting 45-degree angles onto moldings so that they can turn corners.

Note that we are using left/right inverse in different senses when the ring operation is function composition. You originally asked about left inverses and then later asked about right inverses; the two situations are exchanged by working in opposite rings, as in my answer above. For example, the operator $d$ which sends a polynomial to its derivative has a one-sided inverse for each constant of integration: with the usual composition convention these are right inverses of $d$, and $d$ is a left inverse of each of them.

(This page was last edited on 31 December 2020, at 16:45.)
UPDATE 07.11: I have what I think are two tools to tackle the problem.

The first tool is the bounded width branch: Given n, form the branch suggested above starting with x_n "representing" 1/n, placing x_2n at x_n - 1/(2n) and x_(2n+1) at x_n + 1/(2n+1), and continuing recursively. The actual tool is the lemma that this branch meets the criteria for extending the sequence and does so using up at most 2/n space, and actually at most 1/n + 1/(2n+1) + 1/(4n+3) + ... . Formally the lemma should read: Let (x_j) for j in S be the subsequence described above, where n in S is given, for k in S one has both 2k and 2k+1 in S, and no other integers are in S. This subsequence can be part of a sequence that satisfies the spacing criterion given in the problem, and max(x_i - x_j) for i, j coming from S is less than 2/n.

The second tool is that, given any starting sequence, there is a way to extend it using bounded width branches to get a solution. Formally: Let (x_m) for m <= M be a finite subsequence which satisfies the spacing criterion given. Then there are M+1 bounded width branches that can be grafted on to the sequence, giving a complete sequence that also satisfies the spacing requirements. Proof sketch: start with x_M, and place x_2M and x_(2M+1) adjacent to it. Then go backwards up to x_(M+1), placing bounded width branches in the space next to the smallest undecorated leaf. The spacing requirements guarantee that the branches will fit without needing to move any of the first M x_i. Also, show that the branches aren't close enough to each other to conflict with the spacing requirement.

So for any suitable sequence of length M, one can extend it to a complete suitable sequence at a cost of at most 2/(M+1). Now with this estimate, one can go through the first few finite sequences and weed out those that are provably nonoptimal. END UPDATE 07.11

I have enough musings to post them as an answer, rather than fill up comment space.
First: Use a simple recursive construction to get an upper bound on the supremum. This places x_1 at 1, x_2n at x_n - 1/(2n), and x_(2n+1) at x_n + 1/(2n+1). This gives an upper bound of the sum over positive integers i of 1/(2^i - 1), which is some number less than 169/105. Of course, you need to prove this construction works. Second: viewed as a tree with node n branching to children x_(2n) and x_(2n+1), note that you can prune and graft the tree, reshaping it as needed. Specifically, start by exchanging branches at nodes 11 and 7. (This works because 1/2 + 1/5 + 2/7 < 2*(1/2) = 1.) You may find that repruning smaller branches leads more quickly to a near optimal bound. Even with the one graft made, the upper bound is reduced to less than 1147/759. Third: start determining optimal placements for the first n terms for small n, which meet the conditions and stay below the bounds established above. A computer simulation should quickly run through placements for n up to 12 which stay below the bound established above. For example, by hand one sees that x_1 < x_2 < x_n for n < 80 already leads to non optimal placements, so that combined with some analysis should prove that x_2 < x_1 in an optimal placement. This approach should lead you quickly to the first four decimal digits of the supremum.
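The recursive construction in the first step is easy to compute; a minimal sketch (this only builds the placement and measures its width — whether it satisfies the problem's spacing criterion is exactly the part the answer says needs proof):

```python
def place(N):
    """Place x_1 = 1, x_{2n} = x_n - 1/(2n), x_{2n+1} = x_n + 1/(2n+1)
    for all indices up to N, following the recursion described above."""
    x = {1: 1.0}
    for n in range(1, N // 2 + 1):
        x[2 * n] = x[n] - 1.0 / (2 * n)
        if 2 * n + 1 <= N:
            x[2 * n + 1] = x[n] + 1.0 / (2 * n + 1)
    return x

x = place(1 << 16)
# The rightmost point follows the all-ones path 1, 3, 7, 15, ...,
# so the width tends to sum_{i>=1} 1/(2^i - 1), which is below 169/105.
width = max(x.values())
```

The maximum is attained along the indices 2^k - 1, giving the partial sums of 1/1 + 1/3 + 1/7 + ... quoted in the answer.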
Introduction to Epidemiological Terms

# Epidemic Parameters

Attack rate/ratio: Refers to the (number of new cases of disease)/(population at risk) during a specified time interval. Generally, the time interval here is defined as “the duration of the outbreak.” (CDC)

R0 (basic reproduction number or “R naught”): Refers to the estimated contagiousness of an infectious agent and is affected by its biological features as well as human behaviors. Generally, it refers to the average number of people an infectious person is expected to infect in an entirely susceptible population (i.e., no immunity or vaccination). For this reason, R0 is by definition unaffected by vaccination, but it can change over time and place and is not a constant of the disease (i.e., an outbreak of a disease can have a different R0 the second time there is an outbreak in the same place if population density is higher). R0 can also be mathematically defined as follows: $R_{0} = \beta * \kappa * D$ in which β is the risk of transmission per contact, κ is the contact rate, and D is the duration of infectiousness. (CDC)

Re (effective reproduction number): The same as R0 without the assumption that everyone is susceptible. As a formula, $R_e = R_0 * X$ where X is the proportion of the population susceptible. Therefore, vaccination would decrease X and correspondingly the Re value. Additionally, as more people are infected with a virus, more individuals become immune to reinfection from the virus, and X decreases. When Re < 1, the total number of infected persons declines, and the outbreak dies out. Re = 1 would keep numbers stable, and Re > 1 would lead to continued growth in the numbers of infected persons. Re gives an idea of transmission over time and is useful for monitoring during an outbreak, as compared to R0, which is most useful in forecasting potential severity and spread at the start of an outbreak.
(CDC, Giesecke 2002)

Doubling time: The period of time required for the number of infected individuals in a population to double. (Vynnycky and White, 2010)

Epidemic curve: A graph of cases vs. time. Most graphs being used in articles are examples of this type of curve. Epidemic curves are most often presented as the number of new (incident) cases over time, though they may also be presented as the cumulative number of cases. An epidemic curve can be used to make predictions about how well interventions are working and to compare across different communities. Like any graph or curve, it is only as reliable as the data it is based on. (CDC)

Community transmission: Refers to transmission occurring between people within the same community. This phenomenon is separate from people acquiring the infection while traveling or through close contact with someone who visited an area with the infection, as it implies that there are unknown cases spreading the infection locally. (CDC)

Case-fatality rate (CFR): The percentage of patients with the disease (cases) who die from the condition. This is sometimes referred to as the case-fatality ratio, as it is strictly speaking a proportion, not a rate. This measure is often used as a proxy for the severity of a disease. Mathematically it can be defined as:

$$\text{CFR} = \frac{\text{Number of cause-specific deaths among incident cases}}{\text{Total number of incident cases}}$$

Note that incomplete data can skew the CFR. For example, if severe cases are more likely to be diagnosed than milder cases, then the denominator would be artificially lowered relative to the numerator and the CFR would be inflated. Conversely, if many people were dying of the cause without being diagnosed (e.g. at home without interfacing with the medical system), the CFR could be artificially lowered. (CDC)

Mortality rate: The number of people who died in a defined population for a given time interval. For this reason, it is often expressed as x deaths per 100,000 people.
The denominator is the entire population at risk for the time studied, and is generally multiplied by 10,000 or 100,000 to make the number comparable to other populations or diseases. Unlike the case-fatality rate, this measure is not a proxy for severity, as it does not look only at infected persons, but rather estimates mortality attributable to the disease across the population. (CDC)

Asymptomatic vs pre-symptomatic: The term “asymptomatic” refers to a patient who will never develop symptoms but carries the infection. Patients who are not yet symptomatic but become symptomatic later were actually “pre-symptomatic” at the time they did not have symptoms (CDC). These two terms caused some confusion in the public sphere, as the term asymptomatic is sometimes erroneously used to encompass both groups, but strictly speaking they are separate and likely have different likelihoods of infecting others (Twitter). Difficulty arises in distinguishing the two at the time of diagnosis. The only way to know if a patient is pre-symptomatic is to follow up with them and see if they develop symptoms. If they do, then they were pre-symptomatic; if not, they were asymptomatic. Importantly, both terms depend on what the case-defining symptoms are accepted to be. For example, once loss of smell was recognized as a COVID-19 symptom, many previously “asymptomatic” patients who only had this symptom would be better reclassified as symptomatic cases.

# Case Descriptors

Incubation period: The period of time between exposure to a pathogen and onset of first symptoms. (CDC)

Latent period: The period of time between exposure to a pathogen and onset of infectiousness. Note that the latent period can be shorter than the incubation period, leaving a window of pre-symptomatic infectiousness. In addition, virus transmission can happen even with a person that ultimately does not develop symptoms, known as asymptomatic infectiousness.
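The reproduction-number formulas above ($R_0 = \beta \kappa D$ and $R_e = R_0 X$) are simple arithmetic; a minimal sketch, with all numeric values made up purely for illustration (they are not estimates for any real disease):

```python
# R0 = beta * kappa * D, as defined above.
beta = 0.05    # risk of transmission per contact (illustrative assumption)
kappa = 10.0   # contacts per day (illustrative assumption)
D = 5.0        # duration of infectiousness in days (illustrative assumption)
R0 = beta * kappa * D          # 0.05 * 10 * 5 = 2.5

# Re = R0 * X, where X is the fraction of the population still susceptible.
# With 70% immune (through infection or vaccination), X = 0.3:
X = 0.3
Re = R0 * X                    # 0.75, i.e. Re < 1: the outbreak dies out
```

With these numbers, vaccination or prior infection has pushed Re below 1 even though R0 > 1, which is the herd-immunity logic described in the text.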
Serial interval: The period of time between symptom onsets in an infector-infectee pair. This period is often used as a proxy for the generation interval, which is the period of time between the infections of an infector-infectee pair. The serial interval helps determine the speed with which an outbreak spreads, and along with Re is a key parameter for how “steep” or “flat” an epidemic curve will appear. Adapted from Giesecke, J. Modern Infectious Disease Epidemiology. 2002.

# Non-Pharmaceutical Interventions

## Containment and Suppression

Contact tracing: An intervention in which close contacts of known cases are traced, notified of their potential exposure, and encouraged to self-quarantine. Relies on speedy testing and significant effort for each case.

Quarantine: An intervention that separates and restricts the movement of people who were exposed to a contagious disease to see if they become sick. Individuals can also choose to self-quarantine. (CDC)

Isolation: Separation of sick people with a contagious disease from those who are not sick. Individuals can also choose to self-isolate. (CDC)

## Mitigation

Social distancing: A public health practice that aims to prevent sick people from coming into close physical contact with healthy people in order to reduce opportunities for disease transmission. It can include large-scale measures like canceling group events or closing public spaces, as well as individual decisions, such as avoiding crowds. The goal of these interventions is to avoid infecting high-risk populations and “flatten the curve.”

Peak: The largest value reached on the epidemic curve of incident cases. This value generally refers to the maximal number of patients infected in a single short time period (e.g. day, week) in the course of an epidemic. Many mitigation strategies are aimed at reducing this peak in order to minimize the number of cases at any given time and reduce the strain on the healthcare system.
(CDC)

Medical surge capacity: The ability of a medical system to provide medical care for an increased volume of patients or an increased medical demand of patients beyond its normal operating capacity. Therefore, when discussing trying to minimize the surge of the pandemic, we are referring to reducing the demand on the health care system. (PHE.gov)

Flattening the curve: The concept is based on the reality that the healthcare system can only handle a limited number of sick patients at one time. Measures to “flatten the curve” attempt to slow the spread so that there are fewer cases at any one time (with cases instead spread out over a longer period of time) and the healthcare system is capable of providing appropriate care without being overwhelmed. Failing to do so increases the spread of the infection and the case-fatality rate, as the healthcare system is unable to provide appropriate care to every patient. Should improvements in treatment occur, a larger proportion of the infected population will have access to appropriate treatment. Should a vaccine be developed, more people will gain immunity that way rather than by getting sick. Note: flattening the curve does not necessarily mean that fewer people will become infected and require medical care in total; the area under the curve is not necessarily reduced (even though many graphs on the Internet might suggest otherwise).
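The flattening effect can be illustrated with a toy compartmental simulation. The SIR model, the Euler time-stepping, and all parameter values below are my own illustrative assumptions (none come from the text): halving the transmission rate lowers the peak sharply, while the total number ever infected falls by much less.

```python
def sir_peak(beta, gamma=0.1, N=1_000_000, I0=10, days=400):
    """Toy SIR model, Euler steps of one day.
    Returns (peak number infectious, total ever infected)."""
    S, I = N - I0, float(I0)
    peak = I
    for _ in range(days):
        new_inf = beta * S * I / N   # new infections this day
        new_rec = gamma * I          # new recoveries this day
        S, I = S - new_inf, I + new_inf - new_rec
        peak = max(peak, I)
    return peak, N - S

fast_peak, fast_total = sir_peak(beta=0.4)   # no distancing (assumed rate)
slow_peak, slow_total = sir_peak(beta=0.2)   # distancing halves transmission
```

Here `slow_peak` is well under half of `fast_peak`, but `slow_total` remains a large fraction of `fast_total`: the curve is flatter, not proportionally smaller in area.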
Resources tagged with Working systematically, similar to Eight Hidden Squares (325 results):

Squares in Rectangles (Stage 3): A 2 by 3 rectangle contains 8 squares and a 3 by 4 rectangle contains 20 squares. What size rectangle(s) contain(s) exactly 100 squares? Can you find them all?

(Stage 3): How many different symmetrical shapes can you make by shading triangles or squares?

Summing Consecutive Numbers (Stage 3): Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way?

Maths Trails (Stages 2 and 3): The NRICH team are always looking for new ways to engage teachers and pupils in problem solving. Here we explain the thinking behind maths trails.

More Magic Potting Sheds (Stage 3): The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it?

Where Can We Visit? (Stage 3): Charlie and Abi put a counter on 42. They wondered if they could visit all the other numbers on their 1-100 board, moving the counter using just these two operations: x2 and -5. What do you think?

When Will You Pay Me? Say the Bells of Old Bailey (Stage 3): Use the interactivity to play two of the bells in a pattern. How do you know when it is your turn to ring, and how do you know which bell to ring?

9 Weights (Stage 3): You have been given nine weights, one of which is slightly heavier than the rest. Can you work out which weight is heavier in just two weighings of the balance?

Tetrahedra Tester (Stage 3): An irregular tetrahedron is composed of four different triangles. Can such a tetrahedron be constructed where the side lengths are 4, 5, 6, 7, 8 and 9 units of length?

Sticky Numbers (Stage 3): Can you arrange the numbers 1 to 17 in a row so that each adjacent pair adds up to a square number?

Triangles to Tetrahedra (Stage 3): Starting with four different triangles, imagine you have an unlimited number of each type. How many different tetrahedra can you make? Convince us you have found them all.

Special Numbers (Stage 3): My two digit number is special because adding the sum of its digits to the product of its digits gives me my original number. What could my number be?

Cuboids (Stage 3): Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all?

You Owe Me Five Farthings, Say the Bells of St Martin's (Stage 3): Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring?

Isosceles Triangles (Stage 3): Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?

Consecutive Negative Numbers (Stage 3): Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers?

First Connect Three (Stages 2 and 3): The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for?

Weights (Stage 3): Different combinations of the weights available allow you to make different totals. Which totals can you make?

American Billions (Stage 3): Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3...

Two and Two (Stages 2 and 3): How many solutions can you find to this sum? Each of the different letters stands for a different number.

Number Daisy (Stage 3): Can you find six numbers to go in the Daisy from which you can make all the numbers from 1 to a number bigger than 25?

Fence It (Stage 3): If you have only 40 metres of fencing available, what is the maximum area of land you can fence off?

Ben's Game (Stage 3): Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters.

Games Related to Nim (Stages 1, 2, 3 and 4): This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning.

First Connect Three for Two (Stages 2 and 3): First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line.

Counting on Letters (Stage 3): The letters of the word ABACUS have been arranged in the shape of a triangle. How many different ways can you find to read the word ABACUS from this triangular pattern?

Team Scream (Stage 2): Seven friends went to a fun fair with lots of scary rides. They decided to pair up for rides until each friend had ridden once with each of the others. What was the total number of rides?

Intersection Sudoku 1 (Stages 3 and 4): A Sudoku with a twist.

Ratio Sudoku 2 (Stages 3 and 4): A Sudoku with clues as ratios.

Magic Potting Sheds (Stage 3): Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it?

Twin Corresponding Sudoku III (Stages 3 and 4): Two sudokus in one. Challenge yourself to make the necessary connections.

Difference Sudoku (Stages 3 and 4): Use the differences to find the solution to this Sudoku.

Broken Toaster (Stage 2, Short): Only one side of a two-slice toaster is working. What is the quickest way to toast both sides of three slices of bread?

Product Sudoku (Stages 3, 4 and 5): The clues for this Sudoku are the product of the numbers in adjacent squares.

Cayley (Stage 3): The letters in the following addition sum represent the digits 1 ... 9. If A=3 and D=2, what number is represented by "CAYLEY"?

Ratio Sudoku 1 (Stages 3 and 4): A Sudoku with clues as ratios.

Product Sudoku 2 (Stages 3 and 4): Given the products of diagonally opposite cells - can you complete this Sudoku?

Ratio Sudoku 3 (Stages 3 and 4): A Sudoku with clues as ratios or fractions.

Seasonal Twin Sudokus (Stages 3 and 4): This pair of linked Sudokus matches letters with numbers and hides a seasonal greeting. Can you find it?

Corresponding Sudokus (Stages 3, 4 and 5): This second Sudoku article discusses "Corresponding Sudokus", which are pairs of Sudokus with terms that can be matched using a substitution rule.

My New Patio (Stage 2): What is the smallest number of tiles needed to tile this patio? Can you investigate patios of different sizes?

Sums and Differences 1 (Stage 2): This challenge focuses on finding the sum and difference of pairs of two-digit numbers.

Consecutive Numbers (Stage 2): An investigation involving adding and subtracting sets of consecutive numbers. Lots to find out, lots to explore.

(Stages 3 and 4): Four numbers on an intersection that need to be placed in the surrounding cells. That is all you need to know to solve this sudoku.

Problem Solving, Using and Applying and Functional Mathematics (Stages 1, 2, 3, 4 and 5): Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information.

Fault-free Rectangles (Stage 2): Find out what a "fault-free" rectangle is and try to make some of your own.

Sums and Differences 2 (Stage 2): Find the sum and difference between a pair of two-digit numbers. Now find the sum and difference between the sum and difference! What happens?
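As an example of working systematically, the "Squares in Rectangles" task above can be checked by brute force. The count of axis-aligned squares in an m by n rectangle of unit cells is the sum over square sizes k of (m-k+1)(n-k+1), which reproduces the 8 and 20 stated in the problem (a sketch of one search strategy, not the intended classroom approach):

```python
def count_squares(m, n):
    """Number of axis-aligned squares of any size in an m x n rectangle."""
    return sum((m - k + 1) * (n - k + 1) for k in range(1, min(m, n) + 1))

assert count_squares(2, 3) == 8    # as stated in the problem
assert count_squares(3, 4) == 20   # as stated in the problem

# Systematic search for rectangles with exactly 100 squares; a 100 x 100
# box is large enough, since counts grow quickly with both dimensions.
hits = [(m, n) for m in range(1, 101) for n in range(m, 101)
        if count_squares(m, n) == 100]
```

The search also turns up the degenerate 1 by 100 strip (one hundred unit squares and nothing larger) alongside the more interesting rectangles.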
# Introduction to the bunching Package

In bunching: Estimate Bunching

knitr::opts_chunk$set(fig.width=4.5, fig.height = 3.5, fig.align = "center", fig.show = "hold", out.width = "50%", collapse = TRUE, comment = "#>")
library(bunching)

# Introduction

This vignette is an introduction to the basic toolkit of the bunching package, and shows examples of how it can be applied to data that feature bunching. The package offers a flexible implementation of the bunching estimator. This was originally developed to tackle questions in labor and public economics, but can be applied to any setting where a constrained optimization problem involves a discontinuity in a constraint, which can be related to a discontinuity in the observed density of the decision variable. The main aim is to measure behavioral responses to such changes in incentives, by estimating how much excess mass, in an otherwise smooth distribution, can be attributed to the responses to a discrete change in constraints at that same level. bunching allows the user to conduct such bunching analysis in a kink or notch setting and returns a rich set of results. Important features of the package include functionality to control for (different levels of) round-number bunching or other bunching masses within the estimation bandwidth, options to split bins by placing the bunching point as the minimum, median or maximum in its bin (for robustness analysis), and estimates of both parametric and reduced-form versions of elasticities associated with the bunching mass. It also provides an exploratory visualization function to speed up pre-analysis, and produces plots in the Chetty et al. (2011) style with lots of options for editing the plot appearance. Further, it returns bootstrapped estimates of all the main estimable parameters, which can be used for further analysis such as incorporation into structural models that rely on bunching moments.
This vignette proceeds by explaining how the bunching package estimates the bunching mass, and then provides several examples using simulated data with kinks and notches. It does not cover any theory behind the bunching estimator. For a review of bunching optimization theory, see the companion vignette bunching_theory.

# Overview of the bunching package

## Main functions and parameter input options

bunchit() is the main function of the package. This does all the analysis and returns a range of results, including a bunching plot. Another function, designed for pre-analysis, is plot_hist(). This can be used to inspect the binned data and plot the density of a given vector without having to run any estimations. A quick visualization can help pick appropriate parameters for the inputs required by bunchit(). The main parameters the user must choose for bunchit() are:

• z_vector: the name of the (unbinned) vector to be analyzed
• zstar: the location of the bunching point
• binwidth: how wide the grouping bins should be
• bins_l and bins_r: how many bins to the left and right of zstar to consider in the bandwidth
• t0 and t1: the marginal (average) tax rates below and above zstar in a kink (notch) setting

Note that without inputs for t0 and t1, no elasticity can be calculated, and bunchit() will throw an error. To avoid this estimation process, use plot_hist() for exploratory analysis, which requires the same inputs except for t0 and t1. The rest of the inputs have set defaults. Among these, the ones that the user may want to experiment with (and which affect estimation) are:

• binv: This is the bin version, which controls how the bins are grouped around zstar: "median" places zstar in the median position in its bin, while "min" and "max" create bins with zstar in the minimum or maximum position. Default is "median"
• poly: The order of the polynomial used to fit the bin counts.
Default is 9
• bins_excl_l and bins_excl_r: How many bins to the left and right of zstar to also include in the bunching (i.e. "excluded") region. This is particularly important when the density exhibits diffuse bunching around zstar. Defaults are 0
• extra_fe: Other fixed points, featuring a bunching mass (or hole), that the counterfactual estimate should also control for. This is useful when there is another focal point in the bandwidth, which can affect the estimate of the bunching mass at zstar. Default is NA
• rn: Round numbers (up to two) to control for through fixed effects. Default is NA
• n_boot: How many bootstrapped samples to use to estimate (residual-based) standard errors. Default is 100
• correct: Whether to correct for the integration constraint. Default is TRUE
• correct_above_zu: When applying the integration constraint correction, this controls where to start shifting the counterfactual distribution up from. If set to TRUE, it only shifts bins above $z_U$ (the upper bound of the bunching region). Default is FALSE, which shifts all bins above $z^*$
• notch: Whether the analysis is for a notch or a kink. Default is FALSE (kink)
• force_notch: In the case of a notch, whether to force the user's choice of bins_excl_r. The default is FALSE, whereby the upper bound of the excluded region is estimated through an iterative process that equates the bunching and missing masses
• e_parametric: Whether to estimate elasticities using the parametric form or the reduced-form specifications. Default is FALSE (non-parametric)
• e_parametric_lb and e_parametric_ub: If both notch = TRUE and e_parametric = TRUE are chosen, the elasticity is found by a non-linear equation solver. e_parametric_lb and e_parametric_ub set the lower and upper bound of possible solution values for the elasticity. Defaults are 1e-04 and 3 respectively
• seed: A value used as a seed for reproducibility of standard errors.
Default is NA

The rest of the inputs control the plot's output, and are explained in the last section of the vignette.

## How does bunchit() estimate the bunching mass?

Using the package's helper functions bin_data() and prep_data_for_fit(), the chosen vector is binned into groups of binwidth $\delta$ around the bunching point $z^*$. The following specification is then run:

$$c_j = \sum_{i=0}^{p} \beta_i(z_j)^{i} + \sum_{i=z_L}^{z_U} \gamma_i \mathbbm{1}[z_j=i] + \sum_{r \in R} \rho_r \mathbbm{1}\Big[\frac{z_j}{r} \in \mathbb{N}\Big] + \sum_{k \in K} \theta_k \mathbbm{1}\Big[z_j \in K \wedge z_j \notin [z_L,z_U]\Big] + v_j$$

$c_j$ is the observation count in bin $j$, $p$ is the order of polynomial used to fit the counts, and $z_L$ and $z_U$ stand for the lower and upper bounds that define the bunching region. The specification also allows the user to control for bunching at round numbers in a set $R$ (defined by rn), and for other fixed effects in a set $K$ (extra_fe) that feature a bunching mass in the estimation bandwidth outside the bunching range $z \in [z_L, z_U]$ but that are not associated with $z^*$ (through the $\rho$ and $\theta$ coefficient vectors respectively). The specification allows for (up to) two different levels of round number bunching. This is useful in cases that typically feature bunching at round numbers (such as earnings distributions), which may also exhibit uneven bunching because some numbers are "rounder" than others. For instance, there could be bunching at all multiples of 1000 and 500, with bunching at the former much stronger than at the latter. Not controlling for round number bunching can significantly bias the bunching estimate upwards if $z^*$ is also a round number. This is because some of the observed bunching will be driven by factors unrelated to the change in incentives driven by the discrete change in the constraint that we want to attribute the bunching to.
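A stripped-down version of this fitting step can be sketched as follows. This is my own Python illustration on invented toy data, not the package's implementation: it keeps only the polynomial and the excluded-region dummies, dropping the round-number and extra-fixed-effect terms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binned data: a smooth density with an artificial excess mass at zstar = 0.
z = np.arange(-20, 21)                       # bin values
counts = 1000 - 5 * z - 0.5 * z**2 + rng.normal(0, 10, z.size)
counts[z == 0] += 400                        # bunching mass at zstar

zL, zU, p = -1, 1, 4                         # excluded region and poly order
X_poly = np.vander(z, p + 1, increasing=True)          # 1, z, ..., z^p
dummies = np.stack([(z == i).astype(float)
                    for i in range(zL, zU + 1)], axis=1)
X = np.hstack([X_poly, dummies])

beta, *_ = np.linalg.lstsq(X, counts, rcond=None)
c_hat = X_poly @ beta[: p + 1]               # counterfactual: polynomial part only

excl = (z >= zL) & (z <= zU)
B_hat = np.sum(counts[excl] - c_hat[excl])   # excess mass, roughly the 400 added
b_hat = B_hat / c_hat[z == 0][0]             # normalized excess mass
```

The dummies absorb the excluded bins so they do not distort the polynomial fit, which is the same logic as the $\gamma_i$ terms in the specification above.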
Similarly, not controlling for other bunching masses can exert a downward bias on the bunching estimate at $z^*$ by biasing the counterfactual estimate upwards. Given this estimation strategy, the estimated bunching mass in the case of a kink is given by:

$$\hat{B}_0 = \sum_{j=z_L}^{z_U}(c_j - \hat{c}_j)$$

and in the case of a notch:

$$\hat{B}_0 = \sum_{j=z_L}^{z^*}(c_j - \hat{c}_j)$$

where $\hat{c}_j$ is the estimated count excluding the contribution of the dummies in the bunching region:

$$\hat{c}_j = \sum_{i=0}^{p} \hat{\beta}_i(z_j)^{i} + \sum_{r \in R} \hat{\rho}_r \mathbbm{1}\Big[\frac{z_j}{r} \in \mathbb{N}\Big] + \sum_{k \in K} \hat{\theta}_k \mathbbm{1}\Big[z_j \in K \wedge z_j \notin [z_L,z_U]\Big]$$

$\hat{B}_0$ estimates the excess number of observations locating at $z^*$ due to the kink or notch. To be able to compare bunching masses across different kinks or notches that feature varying heights of counterfactuals, we use a normalization where we divide the total excess mass by the height of the counterfactual at $z^*$. This returns the \textit{normalized excess mass}, which is a central parameter of interest besides elasticity estimates:

$$\hat{b}_0=\frac{\hat{B}_0}{\hat{c}_0}$$

## The integration constraint correction

The initial estimate $\hat{B}_0$ will be slightly biased, because it ignores the fact that those with counterfactual earnings above $z^* + \Delta z^*$ (those of the marginal buncher) are also exhibiting an interior response to the introduction of a kink or notch at $z^*$. Hence, what we observe in the actual distribution above the bunching region is not the true counterfactual, since this has been shifted. Technically, it is not straightforward to fully account for this because the response is coming from \textit{all} levels above the bunching region, including those beyond our estimation bandwidth. A feasible solution is to approximate this total response by that among those we do observe in our estimation bandwidth.
With a large enough bandwidth (and not extremely large bunching estimates), this typically yields good results. The solution, named the \textit{integration constraint correction}, is to shift the counterfactual distribution lying to the right of $z^*$ upwards until the count of observations under the empirical distribution equals that under the counterfactual distribution. This is done by running:

$$c_j \Bigg(1 + \mathbbm{1}[j>z_U] \frac{\hat{B}_0}{\sum\limits_{j=z^* + 1}^{\infty} c_j} \Bigg) = \sum_{i=0}^{p} \beta_i(z_j)^{i}+ \sum_{i=z_L}^{z_U} \gamma_i \mathbbm{1}[z_j=i]+ \sum_{r \in R} \rho_r \mathbbm{1} \Big[\frac{z_j}{r} \in \mathbb{N}\Big] + \sum_{k \in K} \theta_k \mathbbm{1}\Big[ z_j \in K \wedge z_j \notin [z_L,z_U] \Big] +v_j$$

This is estimated by iteration until a fixed point is found. The final $\hat{B}$ is then based on this updated counterfactual. Note that an alternative formulation is to implement the upward shift starting from bin $j=z_U + 1$ instead of $z^*$. Which alternative is used typically has very little effect on the actual bunching estimate (because the counterfactual should not shift much at $z^*$), but the latter may over-shift the counterfactual in bins above the bunching region. The next sub-section explains how this may be an issue when setting $z_U$ in the notch setting. The package allows the user to conduct robustness checks of the effect of each alternative, by choosing the preferred version through the input parameter correct_above_zu (the default is set to FALSE, i.e. shifting starts at the right of $z^*$).

## How are $z_L$ and $z_U$ set?

With kinks, both can be set visually, as they will be obvious from a simple inspection of the density. With notches, it is typically easy to visually determine the location of $z_L$ but not of $z_U$, because the latter must span both the bunching mass and the hole, and the hole can be very diffuse.
In this case, it is possible to estimate $z_U$ jointly with the bunching mass through an iterative procedure relying on the intuition that the bunching mass must be equal to the missing mass. The approach starts at $z^*$, shifting $z_U$ marginally rightwards until this equality is reached. bunchit() allows the user to choose between estimating $z_U$ through this iterative procedure, or forcing a particular level of $z_U$ instead. The latter can be done by specifying a given value for bins_excl_r and setting force_notch = TRUE (the default is FALSE). Note that in the case of notches, there is a further complication when choosing to find $z_U$ iteratively while also applying the integration constraint correction with the shifting option correct_above_zu set to TRUE. This approach may shift the counterfactual estimate significantly upwards (in some cases, too much!), especially if the bunching mass is very large, leading to estimates of $z_U$ quite far from $z^*$. While this will not usually affect the bunching estimate by much, it can distort the appearance of the counterfactual estimate to the right of $z^*$. It will also reduce the estimate of $\alpha$, the proportion of observations "stuck" in the hole between $z^*$ and $z_D$. It is advisable to try different options for these input parameters to find the one that works best.

## What does bunchit() return?
The function returns a list with the following:

• plot: A plot (a ggplot2 object) with the observed density and estimated counterfactual
• data: A dataframe with the binned data and other generated variables used for the bunching estimation
• cf: The estimated counterfactual density
• model_fit: The coefficients of the fitted model
• B: The estimated bunching mass
• B_vector: A vector of bootstrapped estimates of B
• B_sd: The standard deviation of B_vector
• b: The normalized estimated bunching mass
• b_vector: A vector of bootstrapped estimates of b
• b_sd: The standard deviation of b_vector
• e: The estimated elasticity
• e_vector: A vector of bootstrapped estimates of e
• e_sd: The standard deviation of e_vector
• alpha: The estimated proportion of observations in the hole (in a notch setting only)
• alpha_vector: A vector of bootstrapped estimates of alpha (in a notch setting only)
• alpha_sd: The standard deviation of alpha_vector (in a notch setting only)
• zD: The upper bound of the dominated region (in a notch setting only)
• zD_bin: The bin in which the upper bound of the dominated region lies (in a notch setting only)
• zU_bin: The bin in which the upper bound of the bunching region lies (in a notch setting only; relevant where zU is estimated internally, otherwise it will match the choice of bins_excl_r)
• marginal_buncher: The estimated counterfactual level of the marginal buncher, i.e. $z^* + \Delta z^*$
• marginal_buncher_vector: A vector of bootstrapped estimates of marginal_buncher
• marginal_buncher_sd: The standard deviation of marginal_buncher_vector

# Examples

## The package's example data

bunching comes with some simulated example data, which can be loaded using data(bunching_data). Note that this is a "lazy" load, so the data will not appear in the environment until you view it or use it in a function. The data consists of a dataframe with two vectors of earnings, named kink_vector and notch_vector.
Both feature bunching at an earnings level of 10000, and range between 8000 and 12000.

## Exploring the data using plot_hist()

Let's load the data and visualize the two vectors using the package's plot_hist() function. We'll set binwidth to 50, and bins_l and bins_r to 40, to get a plot with a bandwidth of 2000 around zstar, which is at 10000.

data(bunching_data)
plot_hist(z_vector = bunching_data$kink_vector, zstar = 10000, binwidth = 50,
          bins_l = 40, bins_r = 40, p_title = "Kink", p_title_size = 11)$plot
plot_hist(z_vector = bunching_data$notch_vector, zstar = 10000, binwidth = 50,
          bins_l = 40, bins_r = 40, p_title = "Notch", p_title_size = 11)$plot

Both plots show sharp bunching at $z^*$, with the notch case also exhibiting a diffuse hole to the right, confirming they are appropriate for bunching analysis. The next section builds on these to show several examples of how the main function, bunchit(), can be used to apply the bunching estimator, including cases that feature diffuse bunching, round number bunching and other bunching points in the bandwidth. We first focus on the kink case.

\newpage

## Kinks

### Kink example 1: Sharp bunching

The first example is based on the distribution of our data's kink_vector, as plotted above. To apply the estimator, we need to choose an appropriate polynomial order, as well as the lower and upper limits of the bunching region. Since there does not appear to be any diffuse bunching around 10000, we can simply set the bunching region to be the kink, i.e. bins_excl_l and bins_excl_r can be left at their default values of zero. The distribution also looks fairly smooth outside the bunching region, so we can use a moderate polynomial order for fitting purposes, say poly = 4. For expositional purposes, let's also restrict the bandwidth to 20 bins below and above $z^*$, by setting bins_l and bins_r to 20. We also need to choose values for the marginal tax rate below and above the kink, which we'll set to t0 = 0 and t1 = 0.2.
kink1 <- bunchit(z_vector = bunching_data$kink_vector, zstar = 10000, binwidth = 50,
                 bins_l = 20, bins_r = 20, poly = 4, t0 = 0, t1 = .2,
                 p_title = "Kink analysis")

# return plot
kink1$plot

This shows the standard output of the function's plot object. The black line with circular markers represents the true density, and the maroon line the counterfactual estimate. The vertical red line marks zstar. Let's next return some of the estimated parameters:

# Bunching mass
kink1$B
# Normalized bunching mass
kink1$b
# Elasticity
kink1$e

We can also report the estimates of the normalized bunching mass and elasticity directly on the plot. This is done by simply setting p_b = TRUE and p_e = TRUE. This will also display the standard errors, so let's set a seed to make these reproducible. As an example, we'll also manually set their y-position through p_b_e_ypos.

kink1_param <- bunchit(z_vector = bunching_data$kink_vector, zstar = 10000, binwidth = 50,
                       bins_l = 20, bins_r = 20, poly = 4, t0 = 0, t1 = .2,
                       p_b = TRUE, p_e = TRUE, p_b_e_ypos = 870, seed = 1,
                       p_title = "Kink with b and e estimates on plot")
kink1_param$plot

Note how the counterfactual lies somewhat above the actual density for bins above $z^*$. This is because the estimator applied the integration constraint correction, which is the default setting. We can override this by setting correct = FALSE. The output in this case is:

kink1_no_corr <- bunchit(z_vector = bunching_data$kink_vector, zstar = 10000, binwidth = 50,
                         bins_l = 20, bins_r = 20, poly = 4, t0 = 0, t1 = .2,
                         p_b = TRUE, p_e = TRUE, p_b_e_ypos = 870, seed = 1,
                         correct = FALSE,
                         p_title = "Kink without integration constraint correction")
kink1_no_corr$plot

As expected, the counterfactual to the right of $z^*$ is now lower. The effect of this is to pull the height of the counterfactual at $z^*$ to a slightly lower level than before, leading to a larger estimate for the normalized excess mass of r kink1_no_corr$b compared to r kink1_param$b.
\newpage

### Kink example 2: Diffuse bunching

Next, we consider a case where bunching is diffuse around $z^*$:

# create diffuse bunching
bpoint <- 10000; binwidth <- 50
kink2_vector <- c(bunching_data$kink_vector,
                  rep(bpoint - binwidth, 80), rep(bpoint - 2*binwidth, 190),
                  rep(bpoint + binwidth, 80), rep(bpoint + 2*binwidth, 80))

# visualization
plot_hist(z_vector = kink2_vector, zstar = 10000, binwidth = 50,
          bins_l = 20, bins_r = 20, p_title = "Distribution with diffuse bunching")$plot

This case exhibits diffuse bunching, spanning the region from two bins below to two bins above $z^*$. This suggests optimization frictions, where agents are bunching but cannot precisely target the kink. To account for this, we can set bins_excl_l = 2 and bins_excl_r = 2.

kink2 <- bunchit(z_vector = kink2_vector, zstar = 10000, binwidth = 50,
                 bins_l = 20, bins_r = 20, poly = 4, t0 = 0, t1 = .2,
                 bins_excl_l = 2, bins_excl_r = 2, correct = FALSE,
                 p_b = TRUE, p_e = TRUE, p_b_e_ypos = 870,
                 p_title = "Kink with diffuse bunching")
kink2$plot

The plot now also marks the bounds of the bunching region by default, using vertical dashed lines. The resulting $b$ estimate is r kink2$b.

### Kink example 3: Round number bunching

Distributions may also exhibit bunching for reasons unrelated to the examined discontinuity. This is common, for instance, when analyzing earnings or profits, which tend to be set at or reported in round numbers. If $z^*$ is also at a round number, then some (or all) of the bunching may actually be due to round number bunching rather than the discontinuity in question. For empirical examples, see for instance Clifford and Mavrokonstantis (2021) and Mavrokonstantis and Seibold (2022). In such cases, we want to residualize this effect by introducing controls.
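In terms of the estimation specification, these controls enter through the $\rho_r$ dummies. As a sketch of one plausible encoding (the package's exact internal construction may differ, and zj here is a hypothetical vector of bin values), the two levels can be made non-overlapping so that their spike sizes are estimated separately:

```r
# Sketch: round-number indicators for two levels (hypothetical bin values zj).
# The "rounder" level (500) takes precedence where both levels divide a bin value.
zj    <- seq(9000, 11000, by = 50)
rn500 <- as.integer(zj %% 500 == 0)
rn250 <- as.integer(zj %% 250 == 0 & rn500 == 0)  # multiples of 250 only
```

Each indicator then gets its own coefficient in the counterfactual regression, so the estimated spikes at multiples of 500 and at multiples of 250 can differ in size.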
As an example, consider the following distribution:

# create round number bunching
rn1 <- 500; rn2 <- 250; bpoint <- 10000
kink3_vector <- c(bunching_data$kink_vector,
                  rep(bpoint + rn1, 270), rep(bpoint + 2*rn1, 230),
                  rep(bpoint - rn1, 260), rep(bpoint - 2*rn1, 275),
                  rep(bpoint + rn2, 130), rep(bpoint + 3*rn2, 140),
                  rep(bpoint - rn2, 120), rep(bpoint - 3*rn2, 135))
plot_hist(z_vector = kink3_vector, zstar = 10000, binwidth = 50,
          bins_l = 20, bins_r = 20, p_freq_msize = 1.5,
          p_title = "Distribution with round number bunching")$plot

This distribution features the usual bunching at $z^*$, but also clear round number bunching at multiples of 500 and 250. Moreover, the magnitudes of round number bunching differ, with bunching at multiples of 500 being larger than at multiples of 250. This can occur when some numbers are "rounder" than others and therefore get targeted more. In this case, we need to account for them separately. bunchit() allows the user to specify (up to) two different levels of round numbers to control for, by passing a vector to rn. This is the result:

kink3_rn <- bunchit(z_vector = kink3_vector, zstar = 10000, binwidth = 50,
                    bins_l = 20, bins_r = 20, poly = 4, t0 = 0, t1 = .2,
                    correct = FALSE, p_b = TRUE, seed = 1, rn = c(250, 500),
                    p_title = "Kink controlling for round numbers")
kink3_rn$plot

The counterfactual now features spikes at round number multiples of 250 and 500, accounting for the fact that we would have observed such a distribution, with bunching masses, even in the absence of a kink at $z^*$. Consequently, the counterfactual at $z^*$ also features a spike, implying that much of the bunching is driven by the targeting of round numbers instead of the actual kink.
If we had not controlled for these, the result would have been:

kink3_no_rn <- bunchit(z_vector = kink3_vector, zstar = 10000, binwidth = 50,
                       bins_l = 20, bins_r = 20, poly = 4, t0 = 0, t1 = .2,
                       correct = FALSE, p_b = TRUE, seed = 1,
                       p_title = "Kink not controlling for round numbers")
kink3_no_rn$plot

Notice how the counterfactual at $z^*$ is much lower in this case, resulting in a much larger estimated $b$ of r kink3_no_rn$b, instead of the corrected estimate of r kink3_rn$b.

### Kink example 4: Other bunching mass in bandwidth

Another empirically relevant case is that of another bunching mass, unrelated to round number bunching, present in the estimation bandwidth. This can occur when the kink of interest has been recently created by shifting a former kink in the vicinity, creating a new bunching mass but also leaving behind residual mass, presumably because optimization frictions prevented those bunching at the former kink from de-bunching.\footnote{For a study analyzing such a setting, see Mavrokonstantis and Seibold (2022).} Here is an example of such a case:

# create extra bunching mass
kink4_vector <- c(bunching_data$kink_vector, rep(10200, 540))
plot_hist(z_vector = kink4_vector, zstar = 10000, binwidth = 50,
          bins_l = 40, bins_r = 40, p_freq_msize = 1.5,
          p_title = "Distribution with extra bunching mass in bandwidth")$plot

This distribution exhibits an extra bunching mass at a value of $z = 10200$, which cannot be related to round number bunching. Controlling for this through extra_fe, we get:

kink4_fe <- bunchit(z_vector = kink4_vector, zstar = 10000, binwidth = 50,
                    bins_l = 40, bins_r = 40, poly = 6, t0 = 0, t1 = .2,
                    bins_excl_l = 0, bins_excl_r = 0, correct = FALSE,
                    p_b = TRUE, extra_fe = 10200,
                    p_title = "Kink controlling for extra mass")
kink4_fe$plot

By adding this control, the counterfactual goes exactly through the bunching mass at $z = 10200$.
If we had also applied the integration constraint correction, the counterfactual would lie slightly above that mass due to the correction's upward shift:

kink4_fe_corrected <- bunchit(z_vector = kink4_vector, zstar = 10000, binwidth = 50,
                              bins_l = 40, bins_r = 40, poly = 6, t0 = 0, t1 = .2,
                              correct = TRUE, p_b = TRUE, extra_fe = 10200, seed = 1,
                              p_title = "Kink controlling for extra mass with correction")
kink4_fe_corrected$plot

Applying the integration constraint correction without controlling for the extra bunching mass would have instead returned:

kink4_no_fe <- bunchit(z_vector = kink4_vector, zstar = 10000, binwidth = 50,
                       bins_l = 40, bins_r = 40, poly = 6, t0 = 0, t1 = .2,
                       correct = TRUE, p_b = TRUE, seed = 1,
                       p_title = "Kink not controlling for extra mass with correction")
kink4_no_fe$plot

In this case, the counterfactual is biased upwards because the estimator is trying to fit it smoothly through the extra bunching mass, effectively pulling it up, which reduces the $b$ estimate from r kink4_fe_corrected$b to r kink4_no_fe$b.

\newpage

### Kink example 5: Changing some parameters

Finally, let's explore changing some parameters. Going back to the first example, let's change binv to group the data by forcing zstar to be the maximum value in its bin, and set binwidth to 100. Let's visualize this before setting further parameters:

plot_hist(z_vector = bunching_data$kink_vector, zstar = 10000, binv = "max",
          binwidth = 100, bins_l = 20, bins_r = 20,
          p_title = "Distribution from grouping zstar to be max in bin")$plot

This version now creates some diffuse bunching by shifting some mass to the first bin to the right of $z^*$, so bins_excl_r should be set to 1. We can also increase the flexibility of the polynomial by setting poly = 6.
kink5 <- bunchit(z_vector = bunching_data$kink_vector, zstar = 10000, binv = "max",
                 binwidth = 100, bins_l = 20, bins_r = 20, bins_excl_r = 1,
                 poly = 6, t0 = 0, t1 = .2, p_b = TRUE, seed = 1,
                 p_title = "Kink with diffuse bunching and zstar max in bin")
kink5$plot

Note how $b$ has now dropped from r kink1_param$b to r kink5$b, showing that the estimate can be sensitive to these choices. Running the analysis for various values of the main input parameters is advisable for robustness, and can help uncover any irregularities in the data that should be taken into account.

## Notches

We now move on to bunching examples in a setting with notches. For this, we'll use bunching_data$notch_vector and consider a case of a tax notch where the average tax rate in the first bracket is $t_0 = 0.18$, and jumps to $t_1 = 0.25$ as earnings cross the $z^* = 10000$ threshold.

### Notch example 1: Sharp bunching with hole

Let's visualize the simulated distribution:

plot_hist(z_vector = bunching_data$notch_vector, zstar = 10000, binwidth = 50,
          bins_l = 40, bins_r = 40, p_title = "Notch Example")$plot

First, we will apply the bunching estimator without enforcing the integration constraint correction, and then see how this affects our results. We will also not force $z_U$ to a particular value (the default setting). Since the distribution shows no diffuse bunching below $z^*$, bins_excl_l can be kept at the default value of 0. Note that in settings with notches, we must explicitly set notch = TRUE. The outcome of these input choices is:

notch1 <- bunchit(z_vector = bunching_data$notch_vector, zstar = 10000, binwidth = 50,
                  bins_l = 40, bins_r = 40, poly = 5, t0 = 0.18, t1 = .25,
                  correct = FALSE, notch = TRUE, p_b = TRUE, p_b_e_xpos = 9000,
                  seed = 1, p_title = "Notch without correction")
notch1$plot

The plot now includes two vertical lines: the red dashed vertical line is the usual marker for $z_U$, which here has been estimated internally.
The new blue line marks $z_D$, the upper bound of the dominated region. The plot shows a clear sign of missing mass in the range $z \in (z^*, z_D]$, since the counterfactual lies strictly above the actual density. It also reveals optimization frictions, since standard theory predicts an empty hole in a frictionless world. We can get the exact value of $z_D$ and its bin, as well as an estimate of the fraction of observations "stuck" in the hole, $\alpha$, by returning the following objects:

# zD
notch1$zD
# zD_bin
notch1$zD_bin
# alpha
notch1$alpha

$z_D$ is estimated to be r notch1$zD and lies in bin r notch1$zD_bin, and the proportion of individuals who are unresponsive due to frictions is r notch1$alpha. Further, we can use notch1$zU_bin to get $z_U$, which was estimated at r notch1$zU_bin bins above $z^*$.

### Notch example 2: Effect of integration constraint - shifting from $z^*$ vs $z_U$

As already mentioned, the integration constraint correction can be applied in two different ways. The main difference is whether we start shifting the counterfactual upwards from the bin above $z^*$, or the bin above $z_U$. Let's see the results of the first case, where we set correct = TRUE and rely on the default correct_above_zu = FALSE:

notch2 <- bunchit(z_vector = bunching_data$notch_vector, zstar = 10000, binwidth = 50,
                  bins_l = 40, bins_r = 40, poly = 4, t0 = 0.18, t1 = .25,
                  correct = TRUE, notch = TRUE, p_b = TRUE, p_b_e_xpos = 9000,
                  seed = 1, p_title = "Notch with correction from zstar")
notch2$plot

Now let's see what happens if we instead set correct_above_zu = TRUE:

notch3 <- bunchit(z_vector = bunching_data$notch_vector, zstar = 10000, binwidth = 50,
                  bins_l = 40, bins_r = 40, poly = 5, t0 = 0.18, t1 = .25,
                  correct = TRUE, notch = TRUE, correct_above_zu = TRUE,
                  p_b = TRUE, p_b_e_xpos = 9000, seed = 1,
                  p_title = "Notch with correction from zU")
notch3$plot

In this case, we find that the counterfactual distribution is shifted much more.
This does not affect the estimate of $b$ by much, because the counterfactual at $z^*$ remains stable. It can, however, make the bootstrapped estimates unstable, blowing up the standard errors. Furthermore, it can decrease $\alpha$, because it shifts the counterfactual significantly upwards and increases the missing mass. This impact is expected in cases where $z_U$ is much higher than $z^*$, because it approaches the limit of the bandwidth and forces the same mass to be accounted for by fewer bins, shifting the counterfactual to the right of $z_U$ to much higher levels than otherwise, which also pulls it up for bins between $z^*$ and $z_U$. In this case, it may be better to consider shifting up from $z^*$ (the default), or forcing $z_U$ to some value based on visual inspection. Finally, note that the last run returns a warning:

\textit{estimated zD (upper bound of dominated region) is larger than estimated marginal buncher's counterfactual z level}
\textit{Are you sure this is a notch?}
\textit{If yes, check your input choices for t0, t1, and force_notch.}

This is telling us that with this last type of correction, the results imply that $z_D > z^* + \Delta z^*$, which cannot be true. The user should take the warnings (and their suggestions) seriously.

## Further optional parameter inputs in bunchit() related to the plot's appearance

All parameters associated with the plot are prefixed with p_:

• p_title: Title displayed in the plot. Default is empty
• p_xtitle: x-axis title. Default is the name of the analyzed vector
• p_ytitle: y-axis title. Default is "Count"
• p_title_size: Size of plot title. Default is 9
• p_axis_title_size: Size of x- and y-axis titles. Default is 9
• p_axis_val_size: Size of x- and y-axis value labels. Default is 7.5
• p_miny: Minimum value of y-axis. Default is 0
• p_maxy: Maximum value of y-axis. Default is set internally
• p_ybreaks: y-axis value(s) at which to add horizontal line markers.
Default is optimized internally
• p_freq_color: Color of the frequency line. Default is "black"
• p_cf_color: Color of the counterfactual line. Default is "maroon"
• p_zstar_color: Color of the vertical line marking $z^*$. Default is "red"
• p_grid_major_y_color: Color of the y-axis major grid lines. Default is "lightgrey"
• p_freq_size: Thickness of the frequency line. Default is 0.5
• p_freq_msize: Size of the frequency line markers. Default is 1
• p_cf_size: Thickness of the counterfactual line. Default is 0.5
• p_zstar_size: Thickness of the vertical line marking $z^*$. Default is 0.5
• p_b: Whether to show the normalized bunching estimate on the plot. Default is FALSE
• p_e: Whether to show the elasticity estimate on the plot. Default is FALSE
• p_b_e_xpos: x-coordinate of the bunching/elasticity estimate on the plot. Default is optimized internally
• p_b_e_ypos: y-coordinate of the bunching/elasticity estimate on the plot. Default is optimized internally
• p_b_e_size: Text size of the bunching/elasticity estimate on the plot. Default is 3
• p_domregion_color: Color of the vertical line marking the upper bound of the dominated region (in the notch case). Default is "blue"
• p_domregion_ltype: Line type/style of the vertical line marking the upper bound of the dominated region (in the notch case). Default is "longdash". Any line type compatible with geom_vline() of ggplot2 will work (e.g. "dotted")

Let's see how to use these in the following examples.

### Editing plot options

We will again analyze the original kink vector, but change the plot's appearance in the following ways. First, we'll edit the x- and y-axis titles to "Earnings" and "Bin Count" using p_xtitle = "Earnings" and p_ytitle = "Bin Count". Second, we'll drop the horizontal line markers by setting their color to white using p_grid_major_y_color = "white".
Third, we'll increase the size of the plot title and axis labels: we'll increase the title's size by setting p_title_size = 15, the axes' title size with p_axis_title_size = 13, and the axes' value size with p_axis_val_size = 11. Further, we'll change some colors: the counterfactual line to red using p_cf_color = "red", the true density's color to a navy offshoot using a hex value, p_freq_color = "#1A476F", and the $z^*$ marker to black using p_zstar_color = "black". Next, let's change the frequency line's thickness using p_freq_size = .8, and increase the size of its markers with p_freq_msize = 1.5. We'll also set the minimum y-axis value to 200 and the maximum to 1200 using p_miny = 200 and p_maxy = 1200, but only label the values at 500 and 1000 using p_ybreaks = c(500,1000). Finally, we'll increase the size of the text showing the estimate of $b$ using p_b_e_size = 5, and change its x- and y-coordinates with p_b_e_xpos = 9500 and p_b_e_ypos = 1000. This is the resulting plot:

kink_p <- bunchit(z_vector = bunching_data$kink_vector, zstar = 10000, binwidth = 50,
                  bins_l = 20, bins_r = 20, poly = 4, t0 = 0, t1 = .2,
                  p_title = "Kink analysis", p_xtitle = "Earnings", p_ytitle = "Bin Count",
                  p_title_size = 15, p_axis_title_size = 13, p_axis_val_size = 11,
                  p_grid_major_y_color = "white", p_cf_color = "red",
                  p_freq_color = "#1A476F", p_freq_size = .8, p_zstar_color = "black",
                  p_freq_msize = 1.5, p_miny = 200, p_maxy = 1200,
                  p_ybreaks = c(500,1000), p_b = TRUE, p_b_e_size = 5,
                  p_b_e_xpos = 9500, p_b_e_ypos = 1000, seed = 1)
kink_p$plot

The plotting options are the same for kinks and notches. The only addition is that with notches, the user can also change the appearance (color and line type) of the vertical line marking the upper bound of the dominated region, $z_D$, through p_domregion_color and p_domregion_ltype. Let's set these to "black" and "dotted" respectively. Note that to remove this line completely, simply set p_domregion_color = "white".
We'll also change some of the other settings (p_miny, p_maxy, p_ybreaks, etc.) to better match the notch output. The resulting plot is:

notch_p <- bunchit(z_vector = bunching_data$notch_vector, zstar = 10000, binwidth = 50,
                   bins_l = 40, bins_r = 40, poly = 5, t0 = 0.18, t1 = .25,
                   correct = FALSE, notch = TRUE, p_title = "Notch without correction",
                   p_xtitle = "Earnings", p_ytitle = "Bin Count", p_title_size = 15,
                   p_axis_title_size = 13, p_axis_val_size = 11,
                   p_grid_major_y_color = "white", p_cf_color = "red",
                   p_freq_color = "#1A476F", p_freq_size = .8, p_zstar_color = "black",
                   p_freq_msize = 1.5, p_maxy = 2500, p_ybreaks = c(1000,2000),
                   p_b = TRUE, p_b_e_size = 5, p_b_e_xpos = 8700, p_b_e_ypos = 1500,
                   seed = 1, p_domregion_color = "black", p_domregion_ltype = "dotted")
notch_p$plot

### Some final notes on plotting

Please note that there can be a difference in the appearance of marker and font sizes between the plot as viewed in RStudio and its exported version, so you may need to experiment with a few settings to find the one that best matches your final document. If you require further flexibility than what is provided, you can instead build the whole plot from scratch. From the list of results returned by bunchit(), simply use cf for the estimated counterfactual, and the columns bin and freq_orig from the data dataframe for the bins and per-bin counts.

# References

Chetty, R., Friedman, J.N., Olsen, T. and Pistaferri, L. (2011) https://doi.org/10.1093/qje/qjr013

Clifford, S. and Mavrokonstantis, P. (2021) https://doi.org/10.1016/j.jpubeco.2021.104519

Mavrokonstantis, P. and Seibold, A. (2022) http://dx.doi.org/10.2139/ssrn.4127660

bunching documentation built on Aug. 24, 2022, 5:07 p.m.
## Cryptology ePrint Archive: Report 2018/429

Amortized Complexity of Information-Theoretically Secure MPC Revisited

Ignacio Cascudo and Ronald Cramer and Chaoping Xing and Chen Yuan

Abstract: A fundamental and widely-applied paradigm due to Franklin and Yung (STOC 1992) on Shamir-secret-sharing based general $n$-player MPC shows how one may trade the adversary threshold $t$ against amortized communication complexity, by using a so-called packed version of Shamir's scheme. For e.g.~the BGW-protocol (with active security), this trade-off means that if $t + 2k - 2 < n/3$, then $k$ parallel evaluations of the same arithmetic circuit on different inputs can be performed at the overall cost corresponding to a single BGW-execution.

In this paper we propose a novel paradigm for amortized MPC that offers a different trade-off, namely with the size of the field of the circuit which is securely computed, instead of the adversary threshold. Thus, unlike the Franklin-Yung paradigm, this leaves the adversary threshold unchanged. Therefore, for instance, this paradigm may yield constructions enjoying the maximal adversary threshold $\lfloor (n-1)/3 \rfloor$ in the BGW-model (secure channels, perfect security, active adversary, synchronous communication).

Our idea is to compile an MPC for a circuit over an extension field to a parallel MPC of the same circuit but with inputs defined over its base field and with the same adversary threshold. Key technical handles are our notion of reverse multiplication-friendly embeddings (RMFE) and our proof, by algebraic-geometric means, that these are constant-rate, as well as efficient auxiliary protocols for creating ``subspace randomness'' with good amortized complexity. In the BGW-model, we show that the latter can be constructed by combining our tensored-up linear secret sharing with protocols based on hyper-invertible matrices à la Beerliova-Hirt (or variations thereof).
Along the way, we suggest alternatives for hyper-invertible matrices with the same functionality but which can be defined over a large enough constant-size field, which we believe is of independent interest. As a demonstration of the merits of the novel paradigm, we show that, in the BGW-model and with an optimal adversary threshold $\lfloor (n-1)/3 \rfloor$, it is possible to securely compute a binary circuit with amortized complexity of $O(n)$ bits per gate per instance. Known results would give $n \log n$ bits instead. By combining our result with the Franklin-Yung paradigm, and assuming a sub-optimal adversary (i.e., an arbitrarily small $\epsilon > 0$ fraction below 1/3), this is improved to $O(1)$ bits instead of $O(n)$.

Category / Keywords: cryptographic protocols / multiparty computation, amortization, information-theoretical security, multiplication-friendly embeddings

Original Publication (with minor differences): IACR-CRYPTO-2018

Date: received 4 May 2018, last revised 6 Jun 2018

Contact author: ignacio at math aau dk

Available format(s): PDF | BibTeX Citation

Note: Accepted to CRYPTO 2018. This is the final version submitted by the authors to the IACR and to Springer-Verlag on the 3rd June 2018.

Short URL: ia.cr/2018/429

[ Cryptology ePrint archive ]
In this post, we demonstrate how deep reinforcement learning (deep RL) can be used to learn how to control dexterous hands for a variety of manipulation tasks. We discuss how such methods can learn to make use of low-cost hardware, can be implemented efficiently, and how they can be complemented with techniques such as demonstrations and simulation to accelerate learning.

## Why Dexterous Hands?

A majority of robots in use today rely on simple parallel-jaw grippers as manipulators, which are sufficient for structured settings like factories. However, manipulators capable of performing a wide array of tasks are essential for unstructured, human-centric environments like the home. Multi-fingered hands are among the most versatile manipulators, and enable a wide variety of skills we use in our everyday life, such as moving objects, opening doors, typing, and painting. Unfortunately, controlling dexterous hands is extremely difficult, which limits their use. High-end hands can also be extremely expensive, due to delicate sensing and actuation. Deep reinforcement learning offers the promise of automating complex control tasks even with cheap hardware, but many applications of deep RL rely on huge amounts of simulated data, making them expensive to deploy in terms of both cost and engineering effort. Humans, by contrast, can learn motor skills efficiently, without a simulator or millions of trials. We will first show that deep RL can in fact be used to learn complex manipulation behaviors by training directly in the real world, with modest computation and low-cost robotic hardware, and without any model or simulator. We then describe how learning can be further accelerated by incorporating additional sources of supervision, including demonstrations and simulation. We demonstrate learning on two separate hardware platforms: an inexpensive custom-built 3-fingered hand (the Dynamixel Claw), which costs under \$2500, and the higher-end Allegro hand, which costs about \$15,000.
Left: Dynamixel Claw. Right: Allegro Hand.

## Model-free Reinforcement Learning in the Real World

Deep RL algorithms learn by trial and error, maximizing a user-specified reward function from experience. We’ll use a valve rotation task as a working example, where the hand must open a valve or faucet by rotating it 180 degrees. The reward function simply consists of the negative distance between the current and desired valve orientation, and the hand must figure out on its own how to move to rotate it. A central challenge in deep RL is in using this weak reward signal to find a complex and coordinated behavior strategy (a policy) that succeeds at the task. The policy is represented by a multilayer neural network. This typically requires a large number of trials, so much so that the community is split on whether deep RL methods can be used for training outside of simulation. However, relying on simulation imposes major limitations on applicability: learning directly in the real world makes it possible to learn any task from experience, while using simulators requires designing a suitable simulation, modeling the task and the robot, and carefully adjusting their parameters to achieve good results. We will show later that simulation can accelerate learning substantially, but we first demonstrate that existing RL algorithms can in fact learn this task directly on real hardware. A variety of algorithms should be suitable; we use Truncated Natural Policy Gradient to learn the task, which requires about 9 hours on real hardware.

Learning progress of the Dynamixel Claw on valve rotation.

Final trained policy on valve rotation.

The direct RL approach is appealing for a number of reasons. It requires minimal assumptions, and is thus well suited to autonomously acquire a large repertoire of skills.
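As an illustration, the shaped reward described above can be sketched in a few lines. This is only a sketch: the reward actually used in the experiments may include extra terms, and `valve_reward` is a hypothetical name.

```python
import numpy as np

def valve_reward(theta, theta_target):
    """Negative angular distance between current and target valve angle.

    Angles are in radians; the wrap-around keeps the distance in [0, pi].
    (Illustrative sketch; the experiments' exact reward may differ.)
    """
    diff = np.arctan2(np.sin(theta - theta_target), np.cos(theta - theta_target))
    return -np.abs(diff)
```

The reward is maximal (zero) exactly when the valve reaches the target orientation, and decreases smoothly with angular error, which is what makes it a weak but usable learning signal.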
Since this approach assumes no information other than access to a reward function, it is easy to relearn the skill in a modified environment, for example when using a different object or a different hand – in this case, the Allegro hand.

360° valve rotation with the Allegro Hand.

The exact same method can learn to rotate the valve when we use a different material. We can learn how to rotate a valve made out of foam. This can be quite difficult to simulate accurately, and training directly in the real world allows us to learn without needing accurate simulations.

Dynamixel Claw rotating a foam screw.

The same approach takes 8 hours to solve a different task, which requires flipping an object 180 degrees around the horizontal axis, without any modification.

Dynamixel Claw flipping a block.

These behaviors were learned with low-cost hardware (<\$2500) and a single consumer desktop computer.

## Accelerating Learning with Human Demonstrations

While model-free RL is extremely general, incorporating supervision from human experts can help accelerate learning further. One way to do this, which we describe in our paper on Demonstration Augmented Policy Gradient (DAPG), is to incorporate human demonstrations into the reinforcement learning process. Related approaches have been proposed in the context of off-policy RL, Q-learning, and other robotic tasks. The key idea behind DAPG is that demonstrations can be used to accelerate RL in two ways:

1. Provide a good initialization for the policy via behavior cloning.
2. Provide an auxiliary learning signal throughout the learning process to guide exploration, using a trajectory-tracking auxiliary reward.

The auxiliary objective during RL prevents the policy from diverging from the demonstrations during the RL process. Pure behavior cloning with limited data is often ineffective in training successful policies due to distribution drift and limited data support.
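In outline, the two mechanisms above combine into a single augmented update: the usual policy gradient plus a behavior-cloning term over the demonstrations whose weight decays over training. The numpy sketch below is illustrative only, not the authors' implementation; the decay schedule `lam0 * lam1**k` follows the form used in the DAPG paper, but the inputs are assumed precomputed.

```python
import numpy as np

def dapg_gradient(policy_grads, advantages, demo_grads, k, lam0=0.1, lam1=0.95):
    """Sketch of a demonstration-augmented policy gradient.

    policy_grads: (N, d) grad-log-prob vectors for on-policy samples
    advantages:   (N,)   advantage estimates for those samples
    demo_grads:   (M, d) grad-log-prob vectors for demonstration actions
    k:            training iteration; the demo term decays as lam0 * lam1**k
    """
    pg = (advantages[:, None] * policy_grads).mean(axis=0)
    bc = demo_grads.mean(axis=0)  # behavior-cloning direction toward the demos
    return pg + lam0 * lam1 ** k * bc
```

Early in training the demonstration term steers exploration toward the demonstrated trajectories; as `k` grows the update reduces to the plain policy gradient, so the final policy is shaped by the task reward rather than by imitation alone.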
RL is crucial for robustness and generalization, and the use of demonstrations can substantially accelerate the learning process. We previously validated this algorithm in simulation on a variety of tasks, shown below, where each task used only 25 human demonstrations collected in virtual reality. DAPG enables a speedup of up to 30x on these tasks, while also learning natural and robust behaviors.

Behaviors learned in simulation with DAPG: object pickup, tool use, in-hand manipulation, door opening. Behaviors are robust to size and shape variations, and are natural and smooth.

In the real world, we can use this algorithm with the Dynamixel Claw to significantly accelerate learning. The demonstrations are collected with kinesthetic teaching, where a human teacher moves the fingers of the robot directly in the real world. This brings down the training time on both tasks to under 4 hours.

Left: Valve rotation policy with DAPG. Right: Flipping policy with DAPG.

Learning curves of RL from scratch on hardware vs. DAPG.

Demonstrations provide a natural way to incorporate human priors and accelerate the learning process. Where high-quality successful demonstrations are available, augmenting RL with demonstrations has the potential to substantially accelerate RL. However, obtaining demonstrations may not be possible for all tasks or robot morphologies, making it necessary to also pursue alternative acceleration schemes.

## Accelerating Learning with Simulation

A simulated model of the task can help augment real-world data with large amounts of simulated data to accelerate the learning process. For the simulated data to be representative of the complexities of the real world, randomization of various simulation parameters is often necessary. This kind of randomization has previously been observed to produce robust policies, and can facilitate transfer in the face of both visual and physical discrepancies.
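Randomizing simulation parameters amounts to drawing a fresh physics configuration per training episode. The parameter names and ranges below are invented purely for illustration; a real setup would randomize whatever the chosen simulator exposes.

```python
import random

def sample_sim_params(rng=random):
    """Draw one randomized physics configuration for a simulated episode.

    All names and ranges here are hypothetical, chosen only to show the idea.
    """
    return {
        "valve_friction":   rng.uniform(0.5, 1.5),
        "valve_mass_kg":    rng.uniform(0.05, 0.30),
        "joint_damping":    rng.uniform(0.8, 1.2),
        "actuator_delay_s": rng.uniform(0.0, 0.02),
    }

# A new draw per episode forces the policy to succeed across the whole
# distribution of models rather than exploit one fixed simulator.
params = sample_sim_params()
```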
Our experiments also suggest that simulation-to-reality transfer with randomization can be effective.

Policy for valve rotation transferred from simulation using randomization.

Transfer from simulation has also been explored in concurrent work on dexterous manipulation to learn impressive behaviors, and in a number of prior works for tasks such as picking and placing, visual servoing, and locomotion. While simulation-to-real transfer enabled by randomization is an appealing option, especially for fragile robots, it has a number of limitations. First, the resulting policies can end up being overly conservative due to the randomization, a phenomenon that has been widely observed in the field of robust control. Second, the particular choice of parameters to randomize is crucial for good results, and insights from one task or problem domain may not transfer to others. Third, increasing the amount of randomization results in more complex models, tremendously increasing the training time and required computational resources (Andrychowicz et al. report 100 years of simulated experience, training in 50 hours on thousands of CPU cores). Directly training in the real world may be more efficient and lead to better policies. Finally, and perhaps most importantly, an accurate simulator must be constructed manually, with each new task modeled by hand in the simulation, which requires substantial time and expertise. However, leveraging simulations appropriately can accelerate learning, and more systematic transfer methods are an important direction for future work.

## Accelerating Learning with Learned Models

In some of our previous work, we also studied how learned dynamics models can accelerate real-world reinforcement learning without access to manually engineered simulators. In this approach, local derivatives of the dynamics are approximated by fitting time-varying linear systems, which can then be used to locally and iteratively improve a policy.
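Fitting a local linear model $x' \approx A x + B u + c$ at each time step reduces to an ordinary least-squares problem over observed transitions. A minimal sketch, not the exact procedure used in the cited work:

```python
import numpy as np

def fit_linear_dynamics(X, U, X_next):
    """Least-squares fit of x' ≈ A x + B u + c from a batch of transitions.

    X: (N, dx) states, U: (N, du) actions, X_next: (N, dx) next states.
    Refitting this at every time step yields a time-varying linear model.
    """
    N, dx = X.shape
    du = U.shape[1]
    Z = np.hstack([X, U, np.ones((N, 1))])          # (N, dx + du + 1)
    W, *_ = np.linalg.lstsq(Z, X_next, rcond=None)  # solves Z @ W ≈ X_next
    A = W[:dx].T
    B = W[dx:dx + du].T
    c = W[-1]
    return A, B, c
```

The fitted `(A, B, c)` triples then serve as local derivative information for a trajectory-optimization or policy-improvement step around the current behavior.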
This approach can acquire a variety of in-hand manipulation strategies from scratch in the real world. Furthermore, we see that the same algorithm can even learn to control a pneumatic soft robotic hand to perform a number of dexterous behaviors.

Left: Adroit robotic hand performing in-hand manipulation. Right: Pneumatic soft RBO Hand performing dexterous tasks.

However, the performance of methods with learned models is limited by the quality of the model that can be learned, and in practice asymptotic performance is often still higher with the best model-free algorithms. Further study of model-based reinforcement learning for efficient and effective real-world learning is a promising research direction.

## Takeaways and Challenges

While training in the real world is general and broadly applicable, it has several challenges of its own:

1. Due to the requirement to take a large number of exploratory actions, we observed that the hands often heat up quickly, which requires pauses to avoid damage.
2. Since the hands must attempt the task multiple times, we had to build an automatic reset mechanism. In the future, a promising direction to remove this requirement is to automatically learn reset policies.
3. Reinforcement learning methods require rewards to be provided, and this reward must still be designed manually. Some of our recent work has looked at automating reward specification.

However, enabling robots to learn complex skills directly in the real world is one of the best paths forward to developing truly generalist robots. In the same way that humans can learn directly from experience in the real world, robots that can acquire skills simply by trial and error can explore novel solutions to difficult manipulation problems and discover them with minimal human intervention. At the same time, the availability of demonstrations, simulators, and other prior knowledge can further reduce training times.
The work in this post is based on these papers: A complete paper on the new robotic experiments will be released soon. The research was conducted by Henry Zhu, Abhishek Gupta, Vikash Kumar, Aravind Rajeswaran, and Sergey Levine. Collaborators on earlier projects include Emo Todorov, John Schulman, Giulia Vezzani, Pieter Abbeel, and Clemens Eppner.
# If a certain set of GMAT test score has a mean of 550 and a standard

Status: Preparing for GMAT
Joined: 25 Nov 2015
Posts: 983
Location: India
GPA: 3.64

### Show Tags

25 Aug 2018, 08:16

Difficulty: 15% (low)
Question Stats: 94% (00:34) correct, 6% (01:42) wrong, based on 16 sessions

If a certain set of GMAT test score has a mean of 550 and a standard deviation of 23, and Rob's score is within two standard deviations from the mean, which of the following CANNOT be Rob's score?
A) 402
B) 543
C) 550
D) 583
E) 590

Manager
Joined: 10 May 2018
Posts: 128
Concentration: Finance, Sustainability

Re: If a certain set of GMAT test score has a mean of 550 and a standard [#permalink]

### Show Tags

25 Aug 2018, 08:39

If you stumbled upon this question and have no clue how to go about it, go through the basic concepts on Standard Deviation (and Normal Distribution) here and then try solving the question again; this time you should get the right answer without going through the following explanation. Good luck!

Since the mean is 550, the standard deviation is 23, and Rob's score is within 2 standard deviations of the mean:

550 - 2(23) ≤ Rob's Score ≤ 550 + 2(23) [In words, two standard deviations span up to 46 on both sides of the mean.]

550 - 46 ≤ Rob's Score ≤ 550 + 46

504 ≤ Rob's Score ≤ 596

Answer choice A is the only option not lying in this range, hence it is the answer.

souvonik2k wrote:
If a certain set of GMAT test score has a mean of 550 and a standard deviation of 23, and Rob's score is within two standard deviations from the mean, which of the following CANNOT be Rob's score?

A) 402
B) 543
C) 550
D) 583
E) 590

Attachments
File comment: Normal Distribution (clipped from the GMATClub Math Book)
Board of Directors
Status: QA & VA Forum Moderator
Joined: 11 Jun 2011
Posts: 4230
Location: India
GPA: 3.5

Re: If a certain set of GMAT test score has a mean of 550 and a standard [#permalink]

### Show Tags

25 Aug 2018, 10:13

souvonik2k wrote:
If a certain set of GMAT test score has a mean of 550 and a standard deviation of 23, and Rob's score is within two standard deviations from the mean, which of the following CANNOT be Rob's score?

A) 402
B) 543
C) 550
D) 583
E) 590

$$Given \ Range = Mean \pm 2SD$$

Or, $$Given \ Range = 550 \pm 2 \cdot 23 = 550 \pm 46$$

So, the given range of scores must be within $$550 - 46$$ to $$550 + 46$$, i.e., between $$504$$ and $$596$$.

Among the given options only (A) falls outside the range, hence the answer must be (A).
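The elimination above can also be checked mechanically with a few lines:

```python
mean, sd = 550, 23
lo, hi = mean - 2 * sd, mean + 2 * sd  # 504 and 596

choices = {"A": 402, "B": 543, "C": 550, "D": 583, "E": 590}
outside = [k for k, v in choices.items() if not (lo <= v <= hi)]
print(outside)  # ['A']: only 402 lies outside 504..596
```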
# Question #2d93b

May 15, 2014

The equation used to relate object distance ($d_o$), image distance ($d_i$), and focal length ($f$) for a spherical mirror is:

$\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}$

With convex mirrors, you must be careful: the focal length of the mirror is negative, because the focal point is behind the mirror. Also, you must make sure to have all your distances in the same units (here, convert all the distances to meters).

$\frac{1}{5\ \text{m}} + \frac{1}{d_i} = \frac{1}{-0.1\ \text{m}}$

So $\frac{1}{d_i} = -\frac{51}{5} = -10.2$, which means that $d_i = \frac{1}{-10.2} \approx -0.098$ meters.

The fact that the image distance is negative means that the image is formed behind the mirror, which will always be true for a convex mirror.

Here is a link to a video showing how to do this (note that he uses $s$ for $d_o$ and $s'$ for $d_i$, and uses an equation that comes from rearranging the equation that I gave above):
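A quick numerical check of the computation above, rearranging the mirror equation for the image distance:

```python
# Mirror equation: 1/d_o + 1/d_i = 1/f, with f negative for a convex mirror.
d_o = 5.0   # object distance in meters
f = -0.1    # focal length in meters

d_i = 1 / (1 / f - 1 / d_o)
print(round(d_i, 3))  # -0.098: a virtual image just behind the mirror
```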
## Stream: maths

### Topic: int is a ring

#### Kevin Buzzard (Jun 28 2020 at 15:45):

I'm so nearly there. Those sorries should be done by automation. Kenny told me an earlier simp fix which made simp not get stuck in a hole in the proof of add_assoc, but it's stuck in another one with the distrib lemmas.

```lean
import tactic

-- Kenny tells me there's a missing simp lemma. Without this I get an
-- error on line 132; omega can't do add_assoc
@[simp] theorem quotient.lift_on_beta₂ {α : Type} {β : Type} [setoid α]
  (f : α → α → β) (h) (x y : α) :
  ⟦x⟧.lift_on₂ ⟦y⟧ f h = f x y := rfl

namespace int3

notation ℕ² := ℕ × ℕ

namespace natsquared

notation first := prod.fst
notation second := prod.snd

@[ext] lemma ext {a b : ℕ²} : first a = first b → second a = second b → a = b :=
by tidy

lemma ext_iff (a b : ℕ²) : a = b ↔ first a = first b ∧ second a = second b :=
⟨λ h, by cases h; simp, λ ⟨p, q⟩, ext p q⟩

instance : has_zero ℕ² := ⟨(0, 0)⟩

@[simp] lemma first_zero : first (0 : ℕ²) = 0 := by refl
@[simp] lemma second_zero : second (0 : ℕ²) = 0 := by refl

def has_one : has_one ℕ² := ⟨(1, 0)⟩

local attribute [instance] has_one

@[simp] lemma first_one : first (1 : ℕ²) = 1 := by refl
@[simp] lemma second_one : second (1 : ℕ²) = 0 := by refl

def r (a b : ℕ²) := first a + second b = second a + first b

instance : has_equiv ℕ² := ⟨r⟩

namespace r

theorem refl (a : ℕ²) : a ≈ a :=
begin
  change first a + second a = second a + first a,
  -- if you delete the line above, the line below still works
  omega,
end

theorem symm (a b : ℕ²) : a ≈ b → b ≈ a :=
begin
  intro hab,
  unfold has_equiv.equiv at *,
  rw [r] at *,
  omega,
end

theorem trans (a b c : ℕ²) : a ≈ b → b ≈ c → a ≈ c :=
begin
  intro hab,
  intro hbc,
  unfold has_equiv.equiv at *,
  rw [r] at *,
  omega,
end

theorem equiv : equivalence r := ⟨refl, symm, trans⟩

end r

instance : setoid ℕ² := { r := r, iseqv := r.equiv }

end natsquared

local attribute [instance] natsquared.has_one -- the canonical 1 is another one!

-- definition of int as quotient type
notation ℤ3 := quotient natsquared.setoid

-- theorem! It's a ring!

def zero : ℤ3 := ⟦0⟧

instance : has_zero ℤ3 := ⟨zero⟩

@[simp] lemma zero.thing0 : (0 : ℤ3) = ⟦0⟧ := rfl

def one : ℤ3 := ⟦1⟧

instance : has_one ℤ3 := ⟨one⟩

@[simp] lemma one.thing0 : (1 : ℤ3) = ⟦1⟧ := rfl
@[simp] lemma one.first : first (1 : ℕ²) = 1 := by refl
@[simp] lemma one.second : second (1 : ℕ²) = 0 := by refl

open natsquared

@[simp] lemma thing (a b : ℕ²) :
  a ≈ b ↔ first a + second b = second a + first b := iff.rfl

@[simp] def add (a b : ℤ3) : ℤ3 := quotient.lift_on₂ a b
  (λ z w, ⟦(first z + first w, second z + second w)⟧)
begin
  intros,
  simp at *,
  omega,
end

instance : has_add ℤ3 := ⟨add⟩

@[simp] lemma thing2 (a b : ℤ3) : a + b = add a b := rfl

@[simp] def neg (a : ℤ3) : ℤ3 := quotient.lift_on a (λ b, ⟦(second b, first b)⟧)
begin
  intros,
  simp at *,
  omega
end

instance : has_neg ℤ3 := ⟨neg⟩

@[simp] lemma neg.thing0 (a : ℤ3) : -a = neg a := rfl

instance : add_comm_group ℤ3 :=
{ add := (+),
  add_assoc := begin
    intros a b c,
    apply quotient.induction_on₃ a b c,
    intros,
    simp * at *,
    omega,
  end,
  zero := 0,
  zero_add := begin
    intro a,
    apply quotient.induction_on a,
    intros,
    simp * at *,
    omega
  end,
  add_zero := begin
    intro a,
    apply quotient.induction_on a,
    intros,
    simp * at *,
    omega
  end,
  neg := has_neg.neg,
  add_left_neg := begin
    intro a,
    apply quotient.induction_on a,
    intros,
    simp * at *,
    omega
  end,
  add_comm := begin
    intros a b,
    apply quotient.induction_on₂ a b,
    intros,
    simp * at *,
    omega
  end }

theorem useful (p q r s t u v w : ℕ) (h1 : p + u = q + t) (h2 : r + w = s + v) :
  p * r + q * s + (t * w + u * v) = p * s + q * r + (t * v + u * w) :=
begin
  have h3 : (p + u) * r = (q + t) * r, rw h1,
  rw [show u * r + (p * r + q * s + (t * w + u * v)) = p * r + u * r + q * s + t * w + u * v, by ring],
  rw h3,
  rw [show q * r + t * r + q * s + t * w + u * v = t * (r + w) + q * s + u * v + q * r, by ring],
  rw [show u * r + (p * s + q * r + (t * v + u * w)) = u * (r + w) + p * s + t * v + q * r, by ring],
  rw [show t * s + t * v + q * s + u * v + q * r = t * s + q * s + t * v + u * v + q * r, by ring],
  -- uv cancels tv cancels qr cancels
  suffices : t * s + q * s = (p + u) * s,
  rw this, ring,
  rw h1, ring,
end

@[simp] def mul (a b : ℤ3) : ℤ3 := quotient.lift_on₂ a b
  (λ z w, ⟦(first z * first w + second z * second w,
            first z * second w + second z * first w)⟧)
-- why is this well-defined?
begin
  intros,
  simp at *,
  apply useful _ _ _ _ _ _ _ _ a_1 a_2,
end

instance : has_mul ℤ3 := ⟨mul⟩

@[simp] lemma thing3 (a b : ℤ3) : a * b = mul a b := rfl

instance : comm_ring ℤ3 :=
{ mul := (*),
  one := 1,
  mul_assoc := begin
    intros a b c,
    apply quotient.induction_on₃ a b c,
    intros,
    simp,
    ring
  end,
  one_mul := begin
    intro a,
    apply quotient.induction_on a,
    intros,
    simp,
    ring,
  end,
  mul_one := begin
    intro a,
    apply quotient.induction_on a,
    intros,
    simp,
    ring,
  end,
  left_distrib := begin
    intros a b c,
    apply quotient.induction_on₃ a b c,
    intros,
    simp,
    ring,
    sorry
  end,
  right_distrib := begin
    intros a b c,
    apply quotient.induction_on₃ a b c,
    intros,
    simp,
    ring,
    sorry
  end,
  mul_comm := begin
    intros a b,
    apply quotient.induction_on₂ a b,
    intros,
    simp,
    ring,
  end, }

end int3
```

#### Kevin Buzzard (Jun 28 2020 at 15:47):

State after simp (when I want omega or ring to kick in) in left_distrib and right_distrib is

```lean
a b c : ℤ3
a_1 b_1 c_1 : ℕ²
⊢ ⟦a_1⟧.lift_on₂ (add_comm_group.add ⟦b_1⟧ ⟦c_1⟧)
    (λ (z w : ℕ²), ⟦(z.fst * w.fst + z.snd * w.snd, z.fst * w.snd + z.snd * w.fst)⟧)
    mul._proof_1 =
  add_comm_group.add ⟦(a_1.fst * b_1.fst + a_1.snd * b_1.snd, a_1.fst * b_1.snd + a_1.snd * b_1.fst)⟧
    ⟦(a_1.fst * c_1.fst + a_1.snd * c_1.snd, a_1.fst * c_1.snd + a_1.snd * c_1.fst)⟧
```

#### Bhavik Mehta (Jun 28 2020 at 15:48):

```lean
left_distrib := begin
  intros a b c,
  apply quotient.induction_on₃ a b c,
  intros,
  apply quotient.sound,
  simp,
  ring,
end,
```

This works

#### Bhavik Mehta (Jun 28 2020 at 15:48):

I've found the trick for any of these is to use quotient.sound after the induction step

Last updated: May 09 2021 at 09:11 UTC
# Conservation of energy in the breaking of glass

How is energy conserved when I throw a heavy metal ball at a piece of glass and cause it to shatter? The ball starts off with significant kinetic energy and ends up with very little. The glass sees what I think can be a small change in potential energy, depending on how you position things, but otherwise, I still see humongous energy losses. Is it heat energy during the process of shattering? Or am I neglecting something else...

EDIT: according to a comment on one of the answers, I decided to clarify/simplify. I said metal ball because it's not likely a metal ball will absorb much of the energy through a deformation.

Chemical bonds should definitely be taken into account. Let us make an order-of-magnitude estimate: We take a square glass panel with length $L = 1\,\mathrm{m}$ and width $e = 1\,\mathrm{cm}$. We will compute the energy required to break it in two, considering only the chemical bond energy. What we need is the number of broken bonds and the energy of each bond.

We can consider that each molecule occupies a sphere with a radius $$r \simeq 1\,\text{Å} = 10^{-10}\,\mathrm{m}.$$ The bond energy should be $$E_\mathrm{bond} \simeq 1\,\mathrm{eV} = 1.6\times 10^{-19}\,\mathrm{J}.$$

So to break the panel, one has to break bonds. Their number can be computed by assuming that with a straight separation, we have broken the bonds on a surface $S_\mathrm{glass} = Le$. Each molecule occupies a surface $S_\mathrm{molec} \simeq r^2$, so the number of broken bonds is $$N \simeq \frac{Le}{r^2}.$$

Finally, $$E_\mathrm{broken} \simeq N E_\mathrm{bond} \simeq \frac{eL}{r^2} E_\mathrm{bond}.$$ We reach $$E_\mathrm{broken} \simeq \frac{10^{-2}}{10^{-20}} \times 1.6\times 10^{-19} \simeq 0.1\,\mathrm{J}.$$

This might seem small. However, one never breaks the glass in one straight separation but in a spiderweb shape. The total crack length must be around $10L$.
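The arithmetic of the straight-cut estimate can be checked in a few lines:

```python
# Order-of-magnitude bond-energy estimate for one straight cut
# through a 1 m x 1 cm glass panel.
L = 1.0           # panel length, m
e = 1e-2          # panel thickness, m
r = 1e-10         # molecular radius, m (about 1 angstrom)
E_bond = 1.6e-19  # about 1 eV per broken bond, in joules

n_bonds = L * e / r**2        # bonds crossing the cut surface, ~1e18
E_broken = n_bonds * E_bond
print(E_broken)               # about 0.16 J, i.e. ~0.1 J as in the estimate
```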
So $$E_\mathrm{breaking} \simeq 1 \,\mathrm{J},$$ which is roughly the kinetic energy of a 100 g ball travelling at 16 km/h.

• Disagree. Most balls would deform much more than glass, so the ball would soak up most of the energy (the force is the same, but the path is longer). The energy absorbed by the glass would be used up mostly in the process you describe, but the rest of the ball's energy would just dissipate as heat and sound waves (the minor part). – Vashu May 27 '18 at 5:04
• For sure. I didn't claim that it was the only phenomenon, but that it had to be taken into account. It would be interesting to have an estimate of the heat loss during the process. I think that for an iron ball it should be quite small. However, for a soccer-like ball I don't know. – Cabirto May 27 '18 at 9:46
• Vashu, I'm editing the question to allow ignoring the energy absorbed by the ball's deformation. It will make the calculations satisfactory and simpler. – user196799 Jul 24 '18 at 10:18
# How can I improve this off-topic question about downloading anonymous type data?

My question is flagged as off-topic: Lacks concrete context. How can I improve this question?

The //... in the edit history was added to tell viewers that the type might or might not have more properties. Since that is against the rules, I removed it. But even without it, the code still works without error.

## 1 Answer

After the edit, it is no longer obvious that the code presented is shortened code. Accordingly, I reopened the question.

Do note that answers may address any and all parts of a question. That also implies that they may comment on things you did not even notice. To enable answerers to make observations about the code that you might have missed, it's important to present code as close to your production code as possible. That also plays into the additional context you can provide for your code.

For a fuller consideration of shortened code, you could check the explanatory meta-answer for the close reason "Lacks Concrete Context".
# Slanted Viviani, PWW

### Problem

$ABC$ is an equilateral triangle. Cevians $AD,$ $BE,$ $CF$ are equal (and, for esthetics' sake, equally inclined to the corresponding sides). $O$ is a point inside the triangle, and points $M,N,P$ are on $BC,AC,AB,$ respectively, such that $OM\parallel AD,$ $ON\parallel BE,$ and $OP\parallel CF.$ Prove that $OM+ON+OP=AD.$

### Proof

The proof is supposed to be self-explanatory. In case of difficulties in interpretation, please have a look at a more explicit variant.

### Acknowledgment

The illustration is by Grégoire Nicollier.
## June 28, 2005

### Here a Pod, There a Pod

iTunes 4.9 is out, with its previously-announced support for podcasts. The podcast directory at the iTunes Music Store is very extensive, and makes subscribing to a podcast the same one-click experience that iTMS users are accustomed to for purchasing music. My favourite radio station in the whole world was recently featured in the New York Times Magazine. But, alas, KCRW’s podcasts are all talk, talk, talk. (What’s with that? Copyright issues?) Still, they do have a groovy live MP3 stream, which iTunes supports quite nicely.

#### Update:

Apple has published the specification for its iTunes extensions to the RSS 2.0 format used by podcasters. If you submit your podcast feed for inclusion in the iTMS podcast directory, you’re supposed to use these extensions to embed the podcast metadata, which will be displayed by the Music Store. [via Sam Ruby]

Posted by distler at 12:18 PM | Permalink | Followups (2)

## June 27, 2005

### Topological G2 Sigma Models

de Boer, Naqvi and Shomer have a very interesting paper, in which they claim to construct a topological version of the supersymmetric $\sigma$-model on a 7-manifold of $G_2$ holonomy. The construction is quite a bit more delicate than the usual topologically-twisted $\sigma$-model. The latter are local 2D field theories, in which the spins of the fields have been shifted in such a way that one of the (nilpotent) supercharges becomes a scalar. If you wish, you can think of them as a 3-stage process:

1. Start with the original “untwisted” $\sigma$-model.
2. Twist, to form a local, but nonunitary field theory.
3. Pass to the $Q$-cohomology, which finally yields a unitary theory (with, in fact, a finite-dimensional Hilbert space of states).

In their construction, the observables (and, for that matter, the nilpotent “scalar” supercharge itself) are nonlocal operators, defined as projections onto particular conformal blocks in the underlying CFT.
So there is no intermediate “step 2”, at least not one that is recognizable as a local field theory.

## June 26, 2005

I am absolutely convinced we did the right thing in Iraq. We’re making major progress there. The insurgency is in its last throes. We will crush them like the cockroaches that they are!

## June 20, 2005

### Oh, Obama!

Barack Obama continues to shine as the most impressive politician that either party has produced in a long time. Read his Knox College Commencement Address. Is there anyone else who even comes close?

Posted by distler at 12:39 AM | Permalink | Followups (4)

## June 19, 2005

### Reconnection Probability

Much of the recent resurgence of interest in cosmic strings has to do with the possibility that such strings might be string-theoretic in nature, and that their properties might be observationally-distinguishable from those of “ordinary” field-theoretic cosmic strings.

## June 17, 2005

### Open WebKit

Amid all the kerfuffle about Apple moving to Intel processors, one important piece of news got underplayed. WebKit¹, the system framework used by Safari (and other MacOSX applications), is now open-source. This is great news, as it means faster bug fixes and greater community involvement in the development of Apple’s flagship browser. And, yes, there is a plan to implement MathML in WebKit (whose progress can be followed here). If this actually comes to fruition, I may finally have to dump Mozilla …

¹ WebCore, based on KHTML, was open-source from the start. WebKit provides a higher-level API and contains a lot of functionality not in WebCore/KHTML.

Posted by distler at 12:01 AM | Permalink | Followups (1)

## June 16, 2005

### Ricci Flat

Matt Headrick and Toby Wiseman have finally come out with their paper on Ricci-flat metrics on K3. I’ve talked to them a fair bit about it; in fact, I take some personal satisfaction in having urged Matt to pursue this work. The problem is as follows.
Calabi conjectured (1954), and Yau proved (1976), the existence of a Ricci-flat metric for complex Kähler manifolds with trivial canonical class. But, when such manifolds are compact, they have no continuous isometries. So, though you know existence, actually finding such a Ricci-flat metric — the solution to a complicated, nonlinear PDE — seemed impossibly hard. Matt and Toby figured out how to do this numerically, at least for manifolds with a lot of discrete symmetries. It’s an incredible computational tour de force, as well as having some interesting physics applications.

Posted by distler at 7:06 PM | Permalink | Followups (1)
# Papers

In here you can find the things that I have written.

### Publications:

• (2017) with Yair Hayut, Spectra of uniformity, submitted. (arXiv)
• (2017) The Bristol model: an abyss called a Cohen real, submitted. (arXiv)
• (2016) Fodor’s lemma can fail everywhere, Acta Math. Hungar. (2018) 154:231. (Journal/arXiv/MathSciNet)
• (2016) Iterating Symmetric Extensions, submitted. (arXiv)
• (2015) with Yair Hayut, Restrictions on Forcings That Change Cofinalities, Arch. Math. Logic 55 (2016), 373-384. (Journal/arXiv/MathSciNet)
• (2014) Embedding Orders Into The Cardinals With $\mathsf{DC}_\kappa$, Fund. Math. 226 (2014), 143-156. (Journal/arXiv/MathSciNet)
# Uncountably many norms such that no two are Lipschitz equivalent

I am struggling with the following question: Is it possible to find uncountably many norms on $C[0,1]$ such that no two are Lipschitz equivalent?

I had thought about trying to define norms for each real number $r>1$: $$\|f\|_r =\left( \int_0^1 f(x)^r \, dx \right)^{1/r}$$ but I think these are Lipschitz equivalent. Does anyone have any better ideas?

• "but I don't think these are Lipschitz equivalent" <- But that is exactly what you want, isn't it? That no two of the norms are (Lipschitz) equivalent. [And what you wrote only gives you a norm for $r \geqslant 1$. But there are still uncountably many of those.] – Daniel Fischer Nov 6 '15 at 12:52
• Sorry, I meant to say I thought they were Lipschitz equivalent. Maybe they aren't? I can't find counterexamples though. – David Smith Nov 6 '15 at 13:54
• No two of these norms are (Lipschitz) equivalent. Hölder's inequality gives $\lVert f\rVert_r \leqslant \lVert f\rVert_s$ for $r < s$, so you need to show that you can't estimate $\lVert f\rVert_s \leqslant C\cdot \lVert f\rVert_r$ then. Play with a modification of $f_s(x) = x^{-1/s}$ (you need to modify that to get a continuous function). – Daniel Fischer Nov 6 '15 at 14:24
• Sophisticated answers! A lazier answer that might work (after I get my morning coffee) is $$\|f\|_r= \sup_{0\leq x\leq r}|f(x) | + \int_r^1|f(x)|\,dx$$ for $0<r<1$. It doesn't take much to be a norm, although it takes a lot to be a useful norm. – B. S. Thomson Nov 6 '15 at 17:25

In addition to the norms suggested in the comments, here is one more (inspired by B. S. Thomson): for $t\in [0,1]$, let $$\|f\|_t = |f(t)| + \int_0^1 |f(x)|\,dx$$ The integral term is only needed to make this a norm rather than a seminorm. The fact that these are mutually nonequivalent follows by considering $f_{a,n}(x)=\max(0,1-n|x-a|)$, which satisfies $$\|f_{a,n}\|_t \to 0,\quad t\ne a;\qquad \text{yet } \ \|f_{a,n}\|_a = 1$$
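Daniel Fischer's hint can also be checked numerically: regularizing $f_s(x)=x^{-1/s}$ as $f_\varepsilon(x)=(x+\varepsilon)^{-1/s}$ keeps $\lVert f_\varepsilon\rVert_r$ bounded while $\lVert f_\varepsilon\rVert_s$ grows without bound as $\varepsilon\to 0$, so no constant $C$ with $\lVert f\rVert_s \leqslant C\lVert f\rVert_r$ can exist. A rough sketch using trapezoidal grid integration (illustrative only, not a proof):

```python
import numpy as np

def p_norm(f, p):
    """L^p norm of f on [0, 1] via trapezoids on a grid clustered near 0."""
    t = np.linspace(0.0, 1.0, 20001)
    x = t ** 6                      # many points near 0, where f blows up
    y = f(x) ** p
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return integral ** (1.0 / p)

r, s = 2.0, 4.0
ratios = []
for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    f = lambda x, e=eps: (x + e) ** (-1.0 / s)
    ratios.append(p_norm(f, s) / p_norm(f, r))

# The ratio ||f||_s / ||f||_r grows like (log(1/eps))^(1/s),
# so the two norms cannot be equivalent.
```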
Put labels on a map.

mf_label(
  x,
  var,
  col,
  cex = 0.7,
  overlap = TRUE,
  lines = TRUE,
  halo = FALSE,
  bg,
  r = 0.1,
  ...
)

## Arguments

x: object of class sf
var: name(s) of the variable(s) to plot
col: labels color
cex: labels cex
overlap: if FALSE, labels are moved so they do not overlap
lines: if TRUE, then lines are plotted between x,y and the word, for those words not covering their x,y coordinate
halo: if TRUE, then a 'halo' is printed around the text and additional arguments bg and r can be modified to set the color and width of the halo
bg: halo color
r: width of the halo
...: further text arguments

## Value

No return value, labels are displayed.

## Examples

mtq <- mf_get_mtq()
mf_map(mtq)
mf_label(
  x = mtq,
  var = "LIBGEO",
  halo = TRUE,
  cex = 0.8,
  overlap = FALSE,
  lines = FALSE
)
# zbMATH — the first resource for mathematics Additional note on partial regularity of weak solutions of the Navier-Stokes equations in the class $$L^\infty (0,T,L^3(\Omega )^3)$$. (English) Zbl 1099.35089 Weak solutions to the classical nonhomogeneous Navier-Stokes problem in a bounded domain $$\Omega \subset \mathbb R^3$$ are considered. A simplified proof of a recent theorem estimating the number of singular points of any weak solution from $$L^\infty (0,T,L^3(\Omega )^3)$$ is given. ##### MSC: 35Q30 Navier-Stokes equations 35D10 Regularity of generalized solutions of PDE (MSC2000) 76D03 Existence, uniqueness, and regularity theory for incompressible viscous fluids 76D05 Navier-Stokes equations for incompressible viscous fluids ##### Keywords: Navier-Stokes equations; partial regularity Full Text: ##### References: [1] L. Caffarelli, R. Kohn and L. Nirenberg: Partial regularity of suitable weak solutions of the Navier-Stokes equations. Comm. Pure Appl. Math. 35 (1982), 771-831. · Zbl 0509.35067 [2] H. Kozono: Uniqueness and regularity of weak solutions to the Navier-Stokes equations. Lecture Notes Numer. Appl. Anal. 16 (1998), 161-208. · Zbl 0941.35065 [3] H. Kozono, H. Sohr: Remark on uniqueness of weak solutions to the Navier-Stokes equations. Analysis 16 (1996), 255-271. · Zbl 0864.35082 [4] J. Neustupa: Partial regularity of weak solutions to the Navier-Stokes equations in the class $$L^\infty (0,T,L^3(\Omega )^3)$$. J. Math. Fluid Mech. 1 (1999), 309-325. · Zbl 0949.35107 [5] Y. Taniuchi: On generalized energy inequality of the Navier-Stokes equations. Manuscripta Math. 94 (1997), 365-384. · Zbl 0896.35106 [6] R. Temam: Navier-Stokes Equations. North-Holland, Amsterdam-New York-Oxford, 1977. · Zbl 0383.35057 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. 
It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
[C++] Arrays and contiguity 27 replies to this topic #1poita Senior Member • Members • 322 posts Posted 10 November 2009 - 12:49 PM Are objects in an array guaranteed to be stored contiguously in memory? In particular: struct Vec3 { float x, y, z; }; Vec3 vs[2]; assert(&(vs[0].z)+1 == &(vs[1].x)); Essentially I want to know if I can reliably pass &vs[0] as an argument for glVertexPointer using this array of vectors. I'm suspecting that I can, but I'm worrying about potential alignment issues. #2.oisyn DevMaster Staff • Moderators • 1842 posts Posted 10 November 2009 - 01:42 PM poita said: Are objects in an array guaranteed to be stored contiguously in memory? Well, yes, but I assume you mean "members of an object" rather than "objects in an array"? The point is, Vec3 might contain padding at the end, which means it is officially not layout compatible with an array of floats. Quote In particular: struct Vec3 { float x, y, z; }; Vec3 vs[2]; assert(&(vs[0].z)+1 == &(vs[1].x)); Essentially I want to know if I can reliably pass &vs[0] as an argument for glVertexPointer using this array of vectors. I'm suspecting that I can, but I'm worrying about potential alignment issues. Theoretically not. Practically, however, I have never seen a compiler in which the assert would trigger. But it's not guaranteed. - Currently working on: the 3D engine for Tomb Raider. #3rouncer Senior Member • Members • 2758 posts Posted 10 November 2009 - 01:49 PM Wouldnt it be heaps harder (and run alot slower) to write the programming language if this wasnt true? you used to be able to fit a game on a disk, then you used to be able to fit a game on a cd, then you used to be able to fit a game on a dvd, now you can barely fit one on your harddrive. #4.oisyn DevMaster Staff • Moderators • 1842 posts Posted 10 November 2009 - 02:00 PM rouncer said: Wouldnt it be heaps harder (and run alot slower) to write the programming language if this wasnt true? That's not the point. 
The point is that C or C++ doesn't guarantee it, so it's up to the compiler implementors how they implement it. Btw, it's not so unthinkable for it to happen: a compiler might align Vec3 on 16 bytes for efficiency reasons. And it could, as the standard doesn't say it's illegal. (Btw I was originally wrong and have edited my post ) - Currently working on: the 3D engine for Tomb Raider. #5rouncer Senior Member • Members • 2758 posts Posted 10 November 2009 - 02:09 PM Which makes your choice of compiler important. you used to be able to fit a game on a disk, then you used to be able to fit a game on a cd, then you used to be able to fit a game on a dvd, now you can barely fit one on your harddrive. #6SyntaxError Valued Member • Members • 139 posts Posted 10 November 2009 - 03:33 PM poita said: struct Vec3 { float x, y, z; }; Vec3 vs[2]; assert(&(vs[0].z)+1 == &(vs[1].x)); This isn't guaranteed to work. If your machine (and/or machine with a given structure) pads to double it will fail. I wouldn't rely on it. Edit: As a side note arrays are guaranteed to be contiguous. The problem isn't with the array. The problem is there may be extra padding at the end of each structure. The compiler adds it for performance reasons. There may be flags to turn this off. I believe the Pentium will load a misaligned float or double correctly with some performance penalty however on many architectures a program will throw a bus error. If it lands on you will be crushed.....OK sorry. In any case just avoid it. #7rouncer Senior Member • Members • 2758 posts Posted 10 November 2009 - 04:05 PM If the compiler was made less confusing, what he says is true. you used to be able to fit a game on a disk, then you used to be able to fit a game on a cd, then you used to be able to fit a game on a dvd, now you can barely fit one on your harddrive. 
#8JarkkoL Senior Member • Members • 477 posts Posted 10 November 2009 - 05:02 PM .oisyn said: a compiler might align Vec3 on 16 bytes for efficiency reasons. Highly unlikely without forcing it explicitly in code. If it would happen in that specific case it would break pretty much all the code in the whole universe ;) #9poita Senior Member • Members • 322 posts Posted 11 November 2009 - 12:13 AM Quote Well, yes, but I assume you mean "members of an object" rather than "objects in an array"? The point is, Vec3 might contain padding at the end, which means it is officially not layout compatible with an array of floats. Well, "contiguously" means "without gaps in between". If there was padding in between objects (due to alignment) then they wouldn't be contiguous. I guess it's arguable whether the 4 bytes of padding at the end are part of the object or not, but that's what I was getting at :) I guess the real question is, should I rely on it? Some of you are saying I should and others that I shouldn't. #10SyntaxError Valued Member • Members • 139 posts Posted 11 November 2009 - 03:17 AM poita said: Well, "contiguously" means "without gaps in between". If there was padding in between objects (due to alignment) then they wouldn't be contiguous. I guess it's arguable whether the 4 bytes of padding at the end are part of the object or not, but that's what I was getting at :) I guess the real question is, should I rely on it? Some of you are saying I should and others that I shouldn't. I don't think it's all that arguable. Although I haven't checked the standard in my experience the padding is not between objects, it's part of the object. Take the sizeof() of your object and you will see this. Therefore the objects themselves are contiguous. 
As an example I compiled this code in Visual C++: #include "stdafx.h" #include <stdio.h> struct Vec3 { float x, y, z; }; struct Vec4 { double x; float y; }; int main() { printf("sizeof(Vec3)=%d\n",sizeof(Vec3)); printf("sizeof(Vec4)=%d\n",sizeof(Vec4)); return 0 ; } results are: sizeof(Vec3)=12 sizeof(Vec4)=16 The space required should be the same but because the compiler does double alignment for the Pentium to avoid bus performance issues, the size of the Vec4 is larger. I worked at intel for over 20 years and we compiled code on a tons of different workstations from different venders. Trust me, you shouldn't use pointers like that if you want things to be portable. #11poita Senior Member • Members • 322 posts Posted 11 November 2009 - 03:33 AM Ok, fair enough. Well that sucks. Guess I'll just have to write a wrapper for vertex, normal, and UV arrays :/ #12rouncer Senior Member • Members • 2758 posts Posted 11 November 2009 - 08:28 AM You pad manually, it doesnt happen automagically... Imagine what it would be like if we wasted disk space all the time this way. you used to be able to fit a game on a disk, then you used to be able to fit a game on a cd, then you used to be able to fit a game on a dvd, now you can barely fit one on your harddrive. #13JarkkoL Senior Member • Members • 477 posts Posted 11 November 2009 - 01:03 PM Don't worry, for that case what you had with Vec3, it's perfectly fine in practice. Just add compile-time assert to verify that the size is sizeof(float)*3 and you are safe. #14.oisyn DevMaster Staff • Moderators • 1842 posts Posted 11 November 2009 - 02:31 PM poita said: Well, "contiguously" means "without gaps in between". If there was padding in between objects (due to alignment) then they wouldn't be contiguous. I guess it's arguable whether the 4 bytes of padding at the end are part of the object or not, but that's what I was getting at Well, that's just the point. 
The padding *is* in fact part of the object according to the standard, that's not even debatable. If you do sizeof(Vec3), it includes the padding (very early GCC compilers bugged in this respect). This is why I misinterpreted your post the first time I read it.

Quote
I guess the real question is, should I rely on it? Some of you are saying I should and others that I shouldn't.

Given modern day compilers, I'd say you can rely on it. But if you want to be truly and utterly portable for all possible compilers that exist now and will exist in the future, you shouldn't.

JarkkoL said: Highly unlikely without forcing it explicitly in code. If it would happen in that specific case it would break pretty much all the code in the whole universe

Again, that's beside the point. The point is that a compiler *could* do that. The fact that there are no popular compilers currently doing that changes nothing. - Currently working on: the 3D engine for Tomb Raider.

#15.oisyn DevMaster Staff • Moderators • 1842 posts Posted 11 November 2009 - 02:55 PM

rouncer said: You pad manually, it doesnt happen automagically... Imagine what it would be like if we wasted disk space all the time this way.

Disk space has nothing to do with this topic. And yes, we waste memory space all the time for efficiency reasons. That's why sizeof(Vec4) in SyntaxError's post is 16, even though it could have been 12.

struct Foo { int a : 1; short b : 1; int c : 1; short d : 1; };

This Foo is 16 bytes, even though it only uses 4 bits so theoretically it could have been packed into a single byte. In short: yes, it does in fact very much so happen 'automagically' - Currently working on: the 3D engine for Tomb Raider.

#16JarkkoL Senior Member • Members • 477 posts Posted 11 November 2009 - 03:15 PM

Sure, compilers could do many other things too to render tons of C++ code useless which doesn't strictly follow the C++ standard, but there's a difference between theory and practice.
It's the interest of compiler developers that existing code actually works on their compilers, so it changes things quite a bit (: Not saying that you should violate the standard just for sake of it (or to drive C++ zealots crazy even though that can be fun too ;)), but sometimes it's just way more practical to do so. #17.oisyn DevMaster Staff • Moderators • 1842 posts Posted 11 November 2009 - 03:25 PM For the record, I would've made the same assumption here as well. - Currently working on: the 3D engine for Tomb Raider. #18SyntaxError Valued Member • Members • 139 posts Posted 11 November 2009 - 05:55 PM JarkkoL said: Sure, compilers could do many other things too to render tons of C++ code useless which doesn't strictly follow the C++ standard, but there's a difference between theory and practice. It's the interest of compiler developers that existing code actually works on their compilers, so it changes things quite a bit (: Not saying that you should violate the standard just for sake of it (or to drive C++ zealots crazy even though that can be fun too ;)), but sometimes it's just way more practical to do so. In my view writing something that moves pointers between structures in this way is kind of crazy. Change or add one more field and it fails even in everyday VC++ . This is very much a function of the CPU architecture and compiler developers are merely trying to build their compilers to generate efficient code. This isn't really a fringe issue. I would hope that not too many programmers are relying on this behavior. I would personally consider it extremely poor programming practice and I'm somewhat curious why it is even necessary. I'm having trouble imagining a problem where this couldn’t easily be coded in a better way. Possibly the OP could give us a more detailed description of the actual problem being solved. 
#19rouncer Senior Member • Members • 2758 posts Posted 11 November 2009 - 06:06 PM So i gather counting the amount of variables in your structure in no way dictates final memory usage? you used to be able to fit a game on a disk, then you used to be able to fit a game on a cd, then you used to be able to fit a game on a dvd, now you can barely fit one on your harddrive. #20Reedbeta DevMaster Staff • 5340 posts • LocationSanta Clara, CA Posted 11 November 2009 - 06:57 PM In the sense that the size of a struct could be more than the sum of the sizes of its members, yes. reedbeta.com - developer blog, OpenGL demos, and other projects 1 user(s) are reading this topic 0 members, 1 guests, 0 anonymous users
axiom-developer

## [Axiom-developer] [DistributedMultivariatePolynomial] (new)

From: hemmecke
Subject: [Axiom-developer] [DistributedMultivariatePolynomial] (new)
Date: Tue, 21 Feb 2006 04:56:30 -0600

Changes http://wiki.axiom-developer.org/DistributedMultivariatePolynomial/diff

--

The use of
\begin{axiom}
R := Expression Integer
\end{axiom}
as the coefficient domain in
\begin{axiom}
P := DistributedMultivariatePolynomial([x,y], R)
\end{axiom}
might lead to unexpected results due to the fact that the domain $R$ can contain arbitrary expressions (including the variable $x$). Take for example:
\begin{axiom}
a: P := x
b: P := a/x
\end{axiom}
Although it might seem strange that the result is not equal to 1, Axiom behaved perfectly the way you told it to. If the interpreter sees $a/x$, it knows the type of $a$ but not yet that of $x$. So it looks for a function it can apply. It finds that if $x$ is coerced to $R$ (Expression Integer) then there is a function in $P$, namely::

  if R has Field then (p : %) / (r : R) == inv(r) * p

By the way, in Axiom Expression Integer is considered to be a Field.
\begin{axiom}
R has Field
\end{axiom}
Thus $x$ is inverted (and now lies in $R$) and then multiplied with $a$. There is no further simplification done.

The problematic thing is if the above expression ($a/x$) is not treated carefully enough. For example, by construction it should by now be clear that it has degree 1.
\begin{axiom}
degree b
\end{axiom}
And it should also be clear that the following two expressions result in different output. They are even stored differently in the internal structure of $P$.
\begin{axiom}
x*b
(x::R)*b
\end{axiom}
For the first expression, $x$ is converted to the indeterminate $x$ of the polynomial ring $P$. The interpreter finds an appropriate function::

  *: (%, %) -> %

and applies it. In the second case, it is explicitly said that $x$ has to be considered as an element of $R$.
The interpreter finds the function with a more appropriate signature, namely::

  *: (R, %) -> %

Be careful with something like that.
\begin{axiom}
d: P := x + (x::R)*1
\end{axiom}
From the above discussion it should be clear that this expression is what Axiom was told to do.

Now, a polynomial in $n$ variables is a function (with finite support) from the domain of exponents $E=N^n$ (where $N$ is the non-negative integers) to the domain $R$ of coefficients. $$P = \bigoplus_{e \in E} R$$ With such an interpretation, $d$ has support (i.e. the set of elements $e \in E$ for which the coefficient of $d$ corresponding to $e$ is non-zero) $$\{ (1,0), (0,0) \}$$ and is therefore **not** equal to the polynomial $2x$, which has support $$\{ (1,0) \}.$$

If Axiom is asked to convert $d$ to an arbitrary expression (Expression Integer), it will convert both summands of $d$ to $R$ and as such they are, of course, equal.
\begin{axiom}
d::R
\end{axiom}

--
# eb's question at Yahoo! Answers regarding distance, parametric equations and slope #### MarkFL Staff member Here is the question: Distance formula problem? Sally and Hannah are lost in the desert. Sally is 5 km north and 3 km east of Hannah. At the same time, they both begin walking east. Sally walks at 2 km/hr and Hannah walks at 3 km/hr. 1. When will they be 10 km apart? 2. When will the line through their locations be perpendicular to the line through their starting locations? I found that answer for #1 is 11.66. #2 answer is 34/3 but I don't know how to solve for #2. Here is a link to the question: Distance formula problem? - Yahoo! Answers I have posted a link there to this topic so the OP can find my response. #### MarkFL Staff member Re: eb's question from Yahoo! Answers regarding distance, parametric equations and slope Hello eb, I would place Hannah initially at the origin (0,0), and Sally at (3,5). 1.) I would choose to represent the positions of Hannah and Sally parametrically. The unit of distance is km and the unit of time is hr. So, we may state: $$\displaystyle H(t)=\langle 3t,0 \rangle$$ $$\displaystyle S(t)=\langle 2t+3,5 \rangle$$ and so the distance between them at time $t$ is: $$\displaystyle d(t)=\sqrt{((2t+3)-3t)^2+(5-0)^2}=\sqrt{t^2-6t+34}$$ Now, letting $d(t)=10$ and solving for $0\le t$, we find: $$\displaystyle 10=\sqrt{t^2-6t+34}$$ Square both sides and write in standard form: $$\displaystyle t^2-6t-66=0$$ Use of the quadratic formula yields one non-negative root: $$\displaystyle t=3+5\sqrt{3}\approx11.66$$ 2.) 
Initially the slope of the line through their positions is $$\displaystyle \frac{5}{3}$$, and so we want to equate the slope of the line through their positions at time $t$ to the negative reciprocal of this as follows:

$$\displaystyle \frac{5-0}{2t+3-3t}=-\frac{3}{5}$$

$$\displaystyle \frac{5}{t-3}=\frac{3}{5}$$

Cross-multiply:

$$\displaystyle 3t-9=25$$

$$\displaystyle 3t=34$$

$$\displaystyle t=\frac{34}{3}$$

To eb and any other guests viewing this topic, I invite and encourage you to post other parametric equation/analytic geometry questions in our Pre-Calculus forum.

Best Regards,

Mark.

Last edited:

#### MarkFL

Staff member

Re: eb's question from Yahoo! Answers regarding distance, parametric equations and slope

A new member to MHB, Niall, has asked for clarification:

I'm a bit confused on how you got S(t) = (2t + 3, 5) and H(t) = (3t, 0) could you explain that?

Hello Niall and welcome to MHB!

The $y$-coordinate of both young ladies will remain constant as they are walking due east. Since they are walking at a constant rate, we know their respective $x$-coordinates will be linear functions, where the slope of the line is given by their speed, and the intercept is given by their initial position $x_0$, i.e.

$$\displaystyle x(t)=vt+x_0$$

Sally's speed is 2 kph and her initial position is 3, hence:

$$\displaystyle S(t)=2t+3$$

Hannah's speed is 3 kph and her initial position is 0, hence:

$$\displaystyle H(t)=3t+0=3t$$

Does this explain things clearly? If not, I will be happy to try to explain in another way.
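Both answers can be double-checked with a few lines of Python using the parametrization above (an illustrative sketch, not part of the original thread):

```python
import math

# Positions at time t (hours): Hannah at (3t, 0), Sally at (2t + 3, 5).
def distance(t):
    return math.hypot((2 * t + 3) - 3 * t, 5.0)

# 1. d(t) = 10 leads to t^2 - 6t - 66 = 0; the non-negative root is:
t1 = 3 + 5 * math.sqrt(3)
assert abs(distance(t1) - 10) < 1e-9

# 2. The slope through their positions must equal -3/5,
#    the negative reciprocal of the initial slope 5/3:
t2 = 34 / 3
slope = (5 - 0) / ((2 * t2 + 3) - 3 * t2)
assert abs(slope - (-3 / 5)) < 1e-12

print(round(t1, 2), t2)  # roughly 11.66 hours and 34/3 hours
```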
# The $q_T$ subtraction method for top quark production at hadron colliders

Bonciani, Roberto; Catani, Stefano; Grazzini, Massimiliano; Sargsyan, Hayk; Torre, Alessandro (2015). The $q_T$ subtraction method for top quark production at hadron colliders. European Physical Journal C - Particles and Fields, 75:581.

## Abstract

We consider QCD radiative corrections to top-quark pair production at hadron colliders. We use the $q_T$ subtraction formalism to perform a fully-differential computation for this process. Our calculation is accurate up to the next-to-leading order in QCD perturbation theory and it includes all the flavour off-diagonal partonic channels at the next-to-next-to-leading order. We present a comparison of our numerical results with those obtained with the publicly available numerical programs MCFM and Top++.

## Citations

2 citations in Web of Science®
2 citations in Scopus®

Item Type: Journal Article, refereed, original work
07 Faculty of Science > Physics Institute
530 Physics
English
2015
Springer
1434-6044
Publisher DOI. An embargo period may apply.
https://doi.org/10.1140/epjc/s10052-015-3793-y
arXiv:1508.03585v2
Permanent URL: https://doi.org/10.5167/uzh-121651
## GOOi, an Android-Oriented GUI Library browncoat Prole Posts: 1 Joined: Fri Feb 05, 2016 2:24 pm ### Re: GÖÖi, an Android-Oriented GUI Library Very cool. I'm looking forward to playing with this this weekend. I was wondering if there was some documentation? Specifically: The interesting thing is that Android devices come in all sizes. How would I go about making sure my GUI (Sorry, GÖÖi) looks good on any resolution? I'll probably try to make a Material Design style, to make it "really Androidy" alberto_lara Party member Posts: 331 Joined: Wed Oct 30, 2013 8:59 pm ### Re: GÖÖi, an Android-Oriented GUI Library Very cool. I'm looking forward to playing with this this weekend. Thanks for that. About the good looking in different resolutions, it depends in how you set the font size, border width, etc. you just need to base this measures in something like 1/50 of the screen width (or height), 1/100 * (width + height) / 2, or things like that. Official docs are not available yet. NOTE: Since the touch API recently changed, I need to fix some code (currently it doesn't work as expected). In the mean time, I invite you to see this other project, it uses GÖÖi (an old version) and I'm working on it right now: https://github.com/tavuntu/susse LÖVE Projects: GOOi, Süsse and Katsudö Thank you for taking the time to read this alberto_lara Party member Posts: 331 Joined: Wed Oct 30, 2013 8:59 pm ### Re: GÖÖi, an Android-Oriented GUI Library Since the touch API recently changed, I need to fix some code (currently it doesn't work as expected). Seems like this is fixed, it works now. 
Also, several changes were made to GÖÖi's look. I want to point this out: most of the shapes used to look "smooth" because I was using a combination of "fill" and "line". This doesn't happen anymore; the problem with it was that when setting an alpha component in the color it looked ugly, like you can see in this joystick:

Another good thing is that a lot of drawing code and calculations were removed thanks to the round rectangle support we have now, so this is the default GÖÖi style now:

And here are other themes:

Please note: there is now a new property when setting a theme, called 'howRoundInternally' (number). If it equals 1, the shapes inside the components will be totally round; if it equals 0, the shapes will be rectangular. This adds more flexibility to styles. The only exceptions are joysticks (always round) and spinners (the minus and plus buttons are little images, at least for now). The same applies to the 'howRound' property, except this one works for the base of the component.

There's a small issue with joysticks, I'm checking it.

LÖVE Projects: GOOi, Süsse and Katsudö Thank you for taking the time to read this

alberto_lara Party member • Posts: 331 • Joined: Wed Oct 30, 2013 8:59 pm

### Re: GÖÖi, an Android-Oriented GUI Library

Hi, new component added: Knobs, as an alternative to sliders. They change according to the mouse (or touch) y component. I think adding a datepicker component would be nice, but since it's a little bit complex I'll take a look at that later. Thanks.

(please note, I'm using the msaa filter; if you don't use it GÖÖi can look a little rough, but just a little)

gooi.love

LÖVE Projects: GOOi, Süsse and Katsudö Thank you for taking the time to read this

CapitalEx Prole • Posts: 7 • Joined: Fri Jul 24, 2015 2:33 am

### Re: GÖÖi, an Android-Oriented GUI Library

Is there a way to get UI elements, like a panel, to move around after creation?
~~<Ɵ/\/\_: *snake noises* alberto_lara Party member Posts: 331 Joined: Wed Oct 30, 2013 8:59 pm ### Re: GÖÖi, an Android-Oriented GUI Library CapitalEx wrote:Is there a way to get UI elements, like a panel, to move around after creation? that's not supported but it surely is a good idea, I could start with draggable panels. LÖVE Projects: GOOi, Süsse and Katsudö Thank you for taking the time to read this xopxe Prole Posts: 4 Joined: Fri Apr 22, 2016 3:49 am ### Re: GÖÖi, an Android-Oriented GUI Library Hi, I'm trying GOOi on a laptop with touchscreen and it sort-of does not work: touching a widget is equivalent to hover the cursor with the mouse (it highlights), and there's no way to actually click or grab using the touchscreen. I'm using Ubuntu. And another question, does it support on multitouch on Android? (under Linux does not, related to the previous point) Nixola Inner party member Posts: 1940 Joined: Tue Dec 06, 2011 7:11 pm Location: Italy ### Re: GÖÖi, an Android-Oriented GUI Library The issue may be in your setup or in SDL itself; try running this command in Linux: Code: Select all mkdir /tmp/nixtest && cd /tmp/nixtest && echo "love.mousepressed = function(...) print('Mouse:', ...) end love.touchpressed = function(...) print('Touch:', ...) end" > main.lua && love . > /tmp/nixtest/output.txt Then look at the output.txt file you'll find in that folder; do touchpressed/mousepressed events properly fire? lf = love.filesystem ls = love.sound la = love.audio lp = love.physics li = love.image lg = love.graphics xopxe Prole Posts: 4 Joined: Fri Apr 22, 2016 3:49 am ### Re: GÖÖi, an Android-Oriented GUI Library You're right, I never get love.touchpressed events... Any idea what can be wrong? (I'm on Ubuntu 14.04 with gooi from git) alberto_lara Party member Posts: 331 Joined: Wed Oct 30, 2013 8:59 pm ### Re: GÖÖi, an Android-Oriented GUI Library Hi, sorry for the delay. 
I'm going to make some test and tell you, I've been updated the API so I need to try it on Android (I just put it on Github). The changes were basically: • No explicit id's are given now • More flexible contructors For instance, this was before: Code: Select all gooi.newButton("btn1", "Button text", 100, 100, 200, 30) and this is now: Code: Select all gooi.newButton("Button text", 100, 100, 200, 30) These are other ways of making a button (similar syntax for other components): Code: Select all gooi.newButton() gooi.newButton("A button") gooi.newButton("A button", 100, 100) gooi.newButton("A button", 100, 100, 150, 25) gooi.newButton({ text = "A button", x = 100, y = 100, w = 150, h = 25, orientation = "right", icon = "/imgs/icon.png" }) And I'm going to leave another example: Code for this: Code: Select all pGrid = gooi.newPanel(350, 290, 420, 290, "grid 10x3") -- Add in the specified cell: pGrid :setColspan(1, 1, 3)-- In row 1, col 1, cover 3 columns. :setRowspan(6, 3, 2) :setColspan(8, 2, 2) :setRowspan(8, 2, 3) gooi.newLabel({text = "(Grid Layout demo)", orientation = "center"}), gooi.newLabel({text = "Left label", orientation = "left"}), gooi.newLabel({text = "Centered", orientation = "center"}), gooi.newLabel({text = "Right", orientation = "right"}), gooi.newButton({text = "Left button", orientation = "left"}), gooi.newButton("Centered"), gooi.newButton({text = "Right", orientation = "right"}), gooi.newLabel({text = "Left label", orientation = "left", icon = imgDir.."coin.png"}), gooi.newLabel({text = "Centered", orientation = "center", icon = imgDir.."coin.png"}), gooi.newLabel({text = "Right", orientation = "right", icon = imgDir.."coin.png"}), gooi.newButton({text = "Left button", orientation = "left", icon = imgDir.."medal.png"}), gooi.newButton({text = "Centered", orientation = "center", icon = imgDir.."medal.png"}), gooi.newButton({text = "Right", orientation = "right", icon = imgDir.."medal.png"}), gooi.newSlider({value = 
0.75}):bg("#00000000"):border(3, "#00ff00"):fg({255, 0, 0}), gooi.newCheck("Debug"):roundness(1, 1):bg({127, 63, 0, 200}):fg("#00ffff"):border(1, "#ffff00") :onRelease(function(c) pGrid.layout.debug = not pGrid.layout.debug end), gooi.newBar(0):roundness(0, 1):bg("#77ff00"):fg("#8800ff"):increaseAt(0.05), gooi.newSpinner(-10, 30, 3):roundness(.65, .8):bg("#ff00ff"), gooi.newJoy():roundness(0):border(1, "#000000", "rough"):bg({0, 0, 0, 0}), gooi.newKnob(0.2) ) LÖVE Projects: GOOi, Süsse and Katsudö Thank you for taking the time to read this ### Who is online Users browsing this forum: No registered users and 6 guests
# 4.1 Single-slit diffraction (Page 2/5)

At the larger angle shown in part (c), the path lengths differ by $3\lambda/2$ for rays from the top and bottom of the slit. One ray travels a distance $\lambda$ different from the ray from the bottom and arrives in phase, interfering constructively. Two rays, each from slightly above those two, also add constructively. Most rays from the slit have another ray to interfere with constructively, and a maximum in intensity occurs at this angle. However, not all rays interfere constructively for this situation, so the maximum is not as intense as the central maximum. Finally, in part (d), the angle shown is large enough to produce a second minimum. As seen in the figure, the difference in path length for rays from either side of the slit is $D \sin \theta$, and we see that a destructive minimum is obtained when this distance is an integral multiple of the wavelength. Thus, to obtain destructive interference for a single slit,

$$D \sin \theta = m\lambda, \quad \text{for } m = \pm 1, \pm 2, \pm 3, \ldots \ (\text{destructive}),$$

where $D$ is the slit width, $\lambda$ is the light's wavelength, $\theta$ is the angle relative to the original direction of the light, and $m$ is the order of the minimum. [link] shows a graph of intensity for single-slit interference, and it is apparent that the maxima on either side of the central maximum are much less intense and not as wide. This effect is explored in Double-Slit Diffraction.

## Calculating single-slit diffraction

Visible light of wavelength 550 nm falls on a single slit and produces its second diffraction minimum at an angle of $45.0°$ relative to the incident direction of the light, as in [link]. (a) What is the width of the slit? (b) At what angle is the first minimum produced?
## Strategy

From the given information, and assuming the screen is far away from the slit, we can use the equation $D\sin\theta = m\lambda$ first to find $D$, and again to find the angle for the first minimum $\theta_1$.

## Solution

1. We are given that $\lambda = 550\ \text{nm}$, $m = 2$, and $\theta_2 = 45.0°$. Solving the equation $D\sin\theta = m\lambda$ for $D$ and substituting known values gives
$$D = \frac{m\lambda}{\sin\theta_2} = \frac{2\,(550\ \text{nm})}{\sin 45.0°} = \frac{1100 \times 10^{-9}\ \text{m}}{0.707} = 1.56 \times 10^{-6}\ \text{m}.$$
2. Solving the equation $D\sin\theta = m\lambda$ for $\sin\theta_1$ and substituting the known values gives
$$\sin\theta_1 = \frac{m\lambda}{D} = \frac{1\,(550 \times 10^{-9}\ \text{m})}{1.56 \times 10^{-6}\ \text{m}} = 0.354.$$
Thus the angle $\theta_1$ is $\theta_1 = \sin^{-1} 0.354 = 20.7°.$

## Significance

We see that the slit is narrow (it is only a few times greater than the wavelength of light). This is consistent with the fact that light must interact with an object comparable in size to its wavelength in order to exhibit significant wave effects such as this single-slit diffraction pattern.
We also see that the central maximum extends $20.7°$ on either side of the original beam, for a width of about $41°$. The angle between the first and second minima is only about $24°$ $(45.0° - 20.7°)$. Thus, the second maximum is only about half as wide as the central maximum.
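The two-step calculation in the worked example can be checked numerically. This is our own sketch using NumPy (not part of the original text), with the values given above:

```python
import numpy as np

# Given: second minimum (m = 2) of 550 nm light at 45.0 degrees.
lam = 550e-9                          # wavelength in metres
theta2 = np.deg2rad(45.0)

# (a) Slit width from D sin(theta) = m * lambda, with m = 2.
D = 2 * lam / np.sin(theta2)
print(f"D = {D:.3g} m")               # ≈ 1.56e-6 m

# (b) First minimum (m = 1) from sin(theta1) = lambda / D.
theta1 = np.degrees(np.arcsin(lam / D))
print(f"theta1 = {theta1:.1f} deg")   # ≈ 20.7 deg
```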
### The scorecard object

JSON transcripts include the scorecard object as part of the top-level app_data object when the V‑Spark folder that processed the audio is configured to analyze transcripts using one or more applications.

### Important

The scorecard object is generated by V‑Spark, and does not appear in V‑Blaze or V‑Cloud JSON transcripts.

V‑Spark applications may have multiple levels of category, up to a maximum of 3 subcategories for each top-level category. The scorecard object uses the same nested structure. At its first level, the scorecard object contains one object for each application. Each application object contains a score element and a subcategories object. Each subcategories object contains lower-level subcategories and scores.

The following table lists elements of the scorecard object:

Table 1. Elements in the scorecard object

| Element | Type | Description |
| --- | --- | --- |
| APPLICATION-NAME | object | Represents an application, its score, and all of its subcategories in a subcategories object. |
| score | number | The score value for the application overall. |
| subcategories | object | Contains all subcategories below the first level. |
| score | number | The score value for subcategories. |
| subcategories | object | Contains all subcategories below the second level. |
| score | number | The score value for the lowest-level subcategories. |
| subcategories | object | An empty object, since this is the lowest level of application subcategory. |

The following JSON example shows a scorecard object from a transcript processed by the application WordsApp. One category, named Words, has the maximum number of subcategories. The other category, named MoreWords, has no subcategories.

```json
"scorecard": {
  "WordsApp": {
    "Words": {
      "subcategories": {
        "SubWords": {
          "subcategories": {
            "SubSubWords": {
              "subcategories": {
                "SubSubSubWords": {
                  "subcategories": {},
                  "score": 1
                },
                "SubSubSubWords2": {
                  "subcategories": {},
                  "score": 1
                }
              },
              "score": 1
            }
          },
          "score": 1
        }
      },
      "score": 1
    },
    "MoreWords": {
      "subcategories": {},
      "score": 1
    }
  }
},
```
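As a sketch of how one might consume this nested structure (the helper below is our own, not part of the V‑Spark API), the scorecard can be flattened into (path, score) pairs by recursing on each subcategories object:

```python
def flatten_scores(categories, path=()):
    """Recursively collect ("Cat/Sub/...", score) pairs from a scorecard subtree."""
    pairs = []
    for name, cat in categories.items():
        full = path + (name,)
        pairs.append(("/".join(full), cat["score"]))
        # Descend into the (possibly empty) subcategories object.
        pairs.extend(flatten_scores(cat.get("subcategories", {}), full))
    return pairs

# Trimmed version of the JSON example above.
scorecard = {
    "WordsApp": {
        "Words": {"subcategories": {"SubWords": {"subcategories": {}, "score": 1}}, "score": 1},
        "MoreWords": {"subcategories": {}, "score": 1},
    }
}

for app, cats in scorecard.items():
    for label, score in flatten_scores(cats, (app,)):
        print(label, score)
# WordsApp/Words 1
# WordsApp/Words/SubWords 1
# WordsApp/MoreWords 1
```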
The Fourier transform of the signal $x(t)=\mathrm{e}^{-3 t^{2}}$ is of the following form, where $A$ and $B$ are constants:

1. $\mathrm{A} e^{-\mathrm{B}|f|}$
2. $\mathrm{A} e^{-\mathrm{B} f}$
3. $\mathrm{A}+\mathrm{B}|f|^{2}$
4. $\mathrm{A} e^{-\mathrm{B} f^{2}}$
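The Gaussian-in, Gaussian-out property behind this question can be verified numerically: for $x(t)=e^{-at^2}$ the Fourier transform is $\sqrt{\pi/a}\,e^{-\pi^2 f^2/a}$. A small NumPy check (our own sketch, using a direct Riemann-sum evaluation of the transform integral):

```python
import numpy as np

a = 3.0
t = np.linspace(-20, 20, 400001)   # wide, fine grid: e^{-3 t^2} is negligible at |t| = 20
dt = t[1] - t[0]
x = np.exp(-a * t**2)

for f in (0.0, 0.5, 1.0):
    # X(f) = ∫ x(t) e^{-i 2π f t} dt, approximated by a Riemann sum
    X = np.sum(x * np.exp(-2j * np.pi * f * t)) * dt
    expected = np.sqrt(np.pi / a) * np.exp(-np.pi**2 * f**2 / a)
    assert abs(X - expected) < 1e-6   # the transform of a Gaussian is a Gaussian in f
```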
Thermal emission and scattering by aligned grains: Plane-parallel model and application to multiwavelength polarization of the HL Tau disc

ABSTRACT

Telescopes are now able to resolve dust polarization across circumstellar discs at multiple wavelengths, allowing the study of the polarization spectrum. Most discs show clear evidence of dust scattering through their unidirectional polarization pattern, typically at the shorter wavelength of $\sim 870 \, \mu$m. However, certain discs show an elliptical pattern at ∼3 mm, which is likely due to aligned grains. With HL Tau, its polarization pattern at ∼1.3 mm shows a transition between the two patterns, making it the first example to reveal such a transition. We use the T-matrix method to model elongated dust grains and properly treat scattering of aligned non-spherical grains with a plane-parallel slab model. We demonstrate that a change in optical depth can naturally explain the polarization transition of HL Tau. At low optical depths, the thermal polarization dominates, while at high optical depths, dichroic extinction effectively takes out the thermal polarization and scattering polarization dominates. Motivated by results from the plane-parallel slab, we develop a simple technique to disentangle thermal polarization of the aligned grains T0 and polarization due to scattering S using the azimuthal variation of the polarization fraction. We find that, with increasing wavelength, the fractional polarization spectrum of the scattering component S …

NSF-PAR ID: 10365864. Journal Name: Monthly Notices of the Royal Astronomical Society, Volume 512, Issue 3, pp. 3922–3947. ISSN: 0035-8711. Publisher: Oxford University Press.

The size of dust grains, a, is key to the physical and chemical processes in circumstellar discs, but observational constraints of grain size remain challenging.
(Sub)millimetre continuum observations often show a per cent-level polarization parallel to the disc minor axis, which is generally attributed to scattering by ${\sim}100\, \mu{\rm m}$-sized spherical grains (with a size parameter x ≡ 2$\pi$a/λ < 1, where λ is the wavelength). Larger spherical grains (with x greater than unity) would produce the opposite polarization direction. However, the inferred size is in tension with the opacity index β that points to larger mm/cm-sized grains. We investigate the scattering-produced polarization by large irregular grains with a range of x greater than unity, with optical properties obtained from laboratory experiments. Using the radiative transfer code RADMC-3D, we find that large irregular grains still produce polarization parallel to the disc minor axis. If the original forsterite refractive index in the optical is adopted, then all samples can produce the typically observed level of polarization. Accounting for the more commonly adopted refractive index using the DSHARP dust model, only grains with x of several (corresponding to ∼mm-sized grains) can reach the same polarization level. Our results suggest that grains in discs can …

ABSTRACT

Polarized dust continuum emission has been observed with the Atacama Large Millimeter/submillimeter Array in an increasing number of deeply embedded protostellar systems. It generally shows a sharp transition going from the protostellar envelope to the disc scale, with the polarization fraction typically dropping from ${\sim } 5{{\ \rm per\ cent}}$ to ${\sim } 1{{\ \rm per\ cent}}$ and the inferred magnetic field orientations becoming more aligned with the major axis of the system. We quantitatively investigate these observational trends using a sample of protostars in the Perseus molecular cloud and compare these features with a non-ideal magnetohydrodynamic disc formation simulation.
We find that the gas density increases faster than the magnetic field strength in the transition from the envelope to the disc scale, which makes it more difficult to magnetically align the grains on the disc scale. Specifically, to produce the observed ${\sim } 1{{\ \rm per\ cent}}$ polarization at ${\sim } 100\, \mathrm{au}$ scale via grains aligned with the B-field, even relatively small grains of $1\, \mathrm{\mu m}$ in size need to have their magnetic susceptibilities significantly enhanced (by a factor of ∼20) over the standard value, potentially through superparamagnetic inclusions. This requirement is more stringent for larger grains, …
# A permutation group inducing a topologically transitive action without dense orbits on $\omega^*$

Let $$G$$ be a subgroup of the permutation group $$S_\omega$$ of the countable infinite set $$\omega$$. Each bijection $$g\in G$$ admits a unique extension to a homeomorphism $$\bar g$$ of the Stone-Cech compactification $$\beta\omega$$ of $$\omega$$. The homeomorphism $$\bar g$$ induces a homeomorphism of the remainder $$\omega^*=\beta\omega\setminus\omega$$ of the Stone-Cech compactification. So, we obtain a continuous action of the group $$G$$ on the compact Hausdorff space $$\omega^*$$. I am interested in properties of the obtained dynamical system $$(\omega^*,G)$$. Namely, I would like to know the answer to the following

Problem. Is there a subgroup $$G\subseteq S_\omega$$ such that the dynamical system $$(\omega^*,G)$$ is topologically transitive (= each nonempty open set has dense orbit) but does not have a dense orbit?

An example of such a subgroup $$G$$ exists under the assumption $$\mathrm{non}(\mathcal M)<\mathfrak c$$. So, the question actually asks about the situation in ZFC.

Remark. If a group $$G\subseteq S_\omega$$ induces a topologically transitive action on $$\omega^*$$, then $$G$$ has large cardinality, namely, $$|G|\ge\mathsf \Sigma\ge\max\{\mathfrak b,\mathfrak s,\mathrm{cov}(\mathcal M)\}$$. More information on the cardinal $$\mathsf \Sigma$$ can be found in this preprint.

• Identify $\omega$ with a dense countable order $Q$. Let $G$ be the group of piecewise monotone permutations of $Q$ (i.e., cut $Q$ into finitely many convex pieces, and rearrange them to finitely many convex pieces through order preserving or reversing partial isomorphisms). I think $G$ acts topologically transitively on $\beta^*Q$. But I don't see if there's a dense orbit.
– YCor Apr 2 at 20:32
• Note: For $G$ acting on $\omega$, it's clear that if for every $I,J\subset\omega$ with $J,\omega-I$ infinite there exists $g\in G$ such that $gI\subset^* J$, then $G$ acts minimally on $\beta^*\omega$. Here $\subset^*$ means inclusion modulo finite subset. In particular $S_\omega$ acts minimally on $\beta^*\omega$ (as you told me in answer to a comment of mine in your previous question; I'm just writing the easy argument to help the reader). – YCor Apr 2 at 20:35
• You are right, the group of piecewise monotone functions acts topologically transitively on $Q^*$ (because each sequence contains a monotone subsequence). Concerning the dense orbit, let me think a bit. – Taras Banakh Apr 2 at 20:48
• Yes. I needed piecewise, so as to handle the case of a bounded above increasing sequence, vs an unbounded above increasing sequence. – YCor Apr 2 at 20:49
• The action of the group $G$ on $Q^*$ will have many dense orbits: just take any ultrafilter living on a monotone sequence; its orbit will be dense. – Taras Banakh Apr 2 at 21:07

It turns out that this problem is independent of ZFC because of the following simple

Theorem. Under $$\mathfrak t=\mathfrak c$$, every topologically transitive continuous action of a group $$G$$ on $$\omega^*$$ has a dense orbit.

Proof. Let $$(A_\alpha)_{\alpha\in\mathfrak c}$$ be an enumeration of all infinite subsets of $$\omega$$. By transfinite induction we shall construct a transfinite sequence of infinite subsets $$(U_\alpha)_{\alpha\in\mathfrak c}$$ of $$\omega$$ and a transfinite sequence $$(g_\alpha)_{\alpha\in\mathfrak c}$$ of elements of the group $$G$$ such that for every $$\alpha\in\mathfrak c$$ the following conditions are satisfied:

(a) $$U_\alpha\subseteq^* U_\beta$$ for all $$\beta<\alpha$$;

(b) $$g_\alpha(U_\alpha)\subseteq^* A_\alpha$$.

To start the inductive construction, put $$U_0=A_0$$ and let $$g_0$$ be the identity of the group $$G$$.
Assume that for some ordinal $$\alpha\in\mathfrak c$$, a transfinite sequence $$(U_\beta)_{\beta<\alpha}$$ satisfying the condition (a) has been constructed. By the definition of the tower number $$\mathfrak t$$ and the equality $$\mathfrak t=\mathfrak c>\alpha$$, there exists an infinite subset $$V_\alpha\subseteq\omega$$ such that $$V_\alpha\subseteq^* U_\beta$$ for all $$\beta<\alpha$$. The infinite sets $$V_\alpha$$ and $$A_\alpha$$ determine clopen sets $$\overline V_\alpha=\{p\in\omega^*:V_\alpha\in p\}$$ and $$\bar A_\alpha=\{p\in\omega^*:A_\alpha\in p\}$$ in the space $$\omega^*=\beta\omega\setminus\omega$$. Since the action of the group $$G$$ on $$\omega^*$$ is topologically transitive, there exist $$g_\alpha\in G$$ and an infinite subset $$U_\alpha\subset V_\alpha$$ such that $$g_\alpha(\overline U_\alpha)\subseteq \bar A_\alpha$$, which implies $$g_\alpha (U_\alpha)\subseteq^* A_\alpha$$. This completes the inductive step.

After completing the inductive construction, extend the family $$\{U_\alpha\}_{\alpha\in\mathfrak c}$$ to a free ultrafilter $$\mathcal U$$ and observe that its orbit intersects each clopen set $$\bar A_\alpha$$, $$\alpha\in\mathfrak c$$, and hence is dense in $$\omega^*$$. $$\qquad\square$$

• Interesting. A follow-up question would be the same question (esp. in ZFC+CH) for $G\subset\mathrm{Homeo}(\beta^*\omega)$. – YCor Apr 3 at 10:58
• What is $\beta^*\omega$? If $\beta^*\omega=\omega^*$, then the theorem answers this question under $\mathfrak t=\mathfrak c$ and hence under CH. – Taras Banakh Apr 3 at 11:09
• Yes, I mean $\beta^*X=\beta X-X$ (I'm boycotting the notation $X^*$ which is incomprehensible without context, and I like to retain the letter $\beta$ to denote the Stone-Cech remainder). Ah indeed, I didn't notice your answer doesn't assume $G\subset S_\omega$, so is much more general than what you initially asked. – YCor Apr 3 at 11:16
## Devito CFD Tutorial series

The following series of notebook tutorials will demonstrate the use of Devito and its SymPy-based API to solve a set of classic examples from Computational Fluid Dynamics (CFD). The tutorials are based on the excellent tutorial series CFD Python: 12 steps to Navier-Stokes by Lorena Barba and focus on the implementation with Devito rather than pure CFD or finite difference theory. For a refresher on how to implement 2D finite difference solvers for CFD problems, please see the original tutorial series here: http://lorenabarba.com/blog/cfd-python-12-steps-to-navier-stokes/

### Example 1: Linear convection in 2D

Let's start with a simple 2D convection example - step 5 in the original blog. This will already allow us to demonstrate a lot about the use of Devito's symbolic data objects and how to use them to build a simple operator directly from the symbolic notation of the equation. The governing equation we will implement in this tutorial is:

$$\frac{\partial u}{\partial t}+c\frac{\partial u}{\partial x} + c\frac{\partial u}{\partial y} = 0$$

In order to implement this equation we first discretize it using forward differences in time and backward differences in space. Just as in the original tutorial, we will use $u_{i,j}^n$ to denote a finite difference stencil point with $i$ and $j$ denoting spatial indices and $n$ denoting the time index. So, after re-arranging the discretized equation for the forward stencil point in time we get

$$u_{i,j}^{n+1} = u_{i,j}^n-c \frac{\Delta t}{\Delta x}(u_{i,j}^n-u_{i-1,j}^n)-c \frac{\Delta t}{\Delta y}(u_{i,j}^n-u_{i,j-1}^n)$$

Using this, we can start deriving the computational stencil for this equation. Let's first look at the original pure Python implementation of the linear convection flow - but first we import our tools and define some parameters:

In [1]:

from examples.cfd import plot_field, init_hat
import numpy as np
%matplotlib inline

# Some variable declarations
nx = 81
ny = 81
nt = 100
c = 1.
dx = 2. / (nx - 1)
dy = 2. / (ny - 1)
print("dx %s, dy %s" % (dx, dy))
sigma = .2
dt = sigma * dx

dx 0.025, dy 0.025

A small note on style: Throughout this tutorial series we will use utility functions to plot the various 2D functions and data sets we deal with. These are all taken from the original tutorial series, but have been slightly modified for our purposes. One of the differences readers might find is that the original series uses (y, x) indexing for 2D data arrays, whereas many of the examples have been adapted to use (x, y) notation in our tutorials. So, let's start by creating a simple 2D function and initialising it with a "hat function". We will use that initialisation function a lot, so it comes from our utility scripts:

In [2]:

#NBVAL_IGNORE_OUTPUT
u = np.empty((nx, ny))
init_hat(field=u, dx=dx, dy=dy, value=2.)

# Plot initial condition
plot_field(u)

Now we can repeat the pure NumPy solve from the original tutorial, where we use NumPy array operations to speed up the computation. Note that we skip the derivation of the stencil used to implement our convection equation, as we are going to walk through this process using the Devito API later in this tutorial.

In [3]:

# Repeat initialisation, so we can re-run the cell
init_hat(field=u, dx=dx, dy=dy, value=2.)

for n in range(nt + 1):
    # Copy previous result into a new buffer
    un = u.copy()

    # Update the new result with a 3-point stencil
    u[1:, 1:] = (un[1:, 1:] - (c * dt / dx * (un[1:, 1:] - un[1:, :-1])) -
                              (c * dt / dy * (un[1:, 1:] - un[:-1, 1:])))

    # Apply boundary conditions
    u[0, :] = 1.
    u[-1, :] = 1.
    u[:, 0] = 1.
    u[:, -1] = 1.

In [4]:

#NBVAL_IGNORE_OUTPUT
# A small sanity check for auto-testing
assert (u[45:55, 45:55] > 1.8).all()
u_ref = u.copy()

plot_field(u)

Hooray, the wave moved!
Now, this little example is already very concise from a notational point of view and it teaches us quite a few key points about how to perform finite difference stencil computation via NumPy:

• Due to the backward differencing scheme in space (more later) we use only three stencil points in this example: $u^{n}_{i, j}$, $u^{n}_{i-1, j}$ and $u^{n}_{i, j-1}$. These can be identified in the code through the array indices and correspond to un[1:, 1:], un[:-1, 1:] and un[1:, :-1] respectively.
• Two buffers for array data are used throughout: un[...] is read from, while u[...] is updated, where the line un = u.copy() performs a deep copy of the field to switch buffers between timesteps. Note that in some other finite difference tutorials the cost of this copy operation is sometimes amortised by using two pre-allocated buffers and switching the indices of them explicitly.
• The final four lines within the loop show us how to implement a simple Dirichlet boundary condition by simply setting a value on the outermost rows and columns of our Cartesian grid.

You may have noticed that the hat has not only moved to a different location, but has also changed its shape into a smooth bump. This is a little surprising, as the correct solution of the convection equation would be movement without shape change. The smooth shape is caused by numerical diffusion, a well-known limitation of low-order finite difference schemes. We will discuss this issue and some solutions later in this tutorial.

#### Devito implementation

Now we want to re-create the above example via a Devito operator. To do this, we can start by defining our computational grid and creating a function u as a symbolic devito.TimeFunction.
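The pre-allocated double-buffer variant mentioned in the second bullet can be sketched like this (a minimal illustration of the index-switching trick, not code from the original tutorial; the crude square "hat" stands in for init_hat):

```python
import numpy as np

nx, ny, nt = 81, 81, 100
c = 1.
dx, dy = 2. / (nx - 1), 2. / (ny - 1)
dt = 0.2 * dx

# Two pre-allocated buffers; each step swaps roles instead of copying.
bufs = [np.ones((nx, ny)), np.ones((nx, ny))]
bufs[0][30:50, 30:50] = 2.   # crude "hat" initial condition

for n in range(nt + 1):
    un, u = bufs[n % 2], bufs[(n + 1) % 2]   # read from un, write into u
    u[1:, 1:] = (un[1:, 1:]
                 - c * dt / dx * (un[1:, 1:] - un[1:, :-1])
                 - c * dt / dy * (un[1:, 1:] - un[:-1, 1:]))
    u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 1.   # Dirichlet BCs
```

The stencil is identical to the copy-based loop above; only the buffer management changes, saving one array copy per timestep.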
The core thing to note here is that this is one of Devito's symbolic functions, which have a dual role in the creation of finite difference solvers:

• They behave symbolically like sympy.Function objects, so that we can construct derivatives and use them in symbolic expressions, thus inheriting all the power of automated symbolic manipulation that SymPy provides.
• They act as containers for user data by providing a .data property that wraps automatically allocated memory space in a neat NumPy array.

The particular TimeFunction type that we will declare our variable $u$ as in this case is aware of the fact that we will want to implement a timestepping algorithm with it. So the object u will declare two buffers of shape (nx, ny) for us, as defined by the Grid object, and present them as u.data[0] and u.data[1]. Let's fill the initial buffer with some data and look at it:

In [5]:

#NBVAL_IGNORE_OUTPUT
from devito import Grid, TimeFunction

grid = Grid(shape=(nx, ny), extent=(2., 2.))
u = TimeFunction(name='u', grid=grid)

init_hat(field=u.data[0], dx=dx, dy=dy, value=2.)

plot_field(u.data[0])

Nice. Now we can look at deriving our 3-point stencil using the symbolic capabilities given to our function $u$ by SymPy. For this we will first construct our derivative terms in space and time. For the forward derivative in time we can easily use Devito's shorthand notation u.dt to denote the first derivative in time, and u.dxl and u.dyl to denote the space derivatives. Note that the l means we're using the "left" or backward difference here to adhere to the discretization used in the original tutorials. From the resulting terms we can then create a sympy.Eq object that contains the fully discretised equation, but from a neat high-level notation, as shown below.

In [6]:

from devito import Eq

# Specify the interior flag so that the stencil is only
# applied to the interior of the domain.
eq = Eq(u.dt + c*u.dxl + c*u.dyl, subdomain=grid.interior)
print(eq)

Eq(1.0*u(t, x, y)/h_y - 1.0*u(t, x, y - h_y)/h_y + 1.0*u(t, x, y)/h_x - 1.0*u(t, x - h_x, y)/h_x - u(t, x, y)/dt + u(t + dt, x, y)/dt, 0)

The above step resulted in a fully discretised version of our equation, which includes place-holder symbols for the timestep (dt) and the grid spacings (h_x, h_y). These symbols are based on an internal convention and will later be replaced when we build an operator. But before we can build an operator, we first need to change our discretised expression so that we are updating the forward stencil point in our timestepping scheme - Devito provides another short-hand notation for this: u.forward. For the actual symbolic reordering, SymPy comes to the rescue with the solve utility that we can use to re-organise our equation.

In [7]:

from devito import solve

stencil = solve(eq, u.forward)
print(stencil)

-1.0*dt*u(t, x, y)/h_y + 1.0*dt*u(t, x, y - h_y)/h_y - 1.0*dt*u(t, x, y)/h_x + 1.0*dt*u(t, x - h_x, y)/h_x + u(t, x, y)

The careful reader will note that this is now the equivalent symbolic expression to the RHS term of the NumPy code we showed earlier - only with dx and dy denoted as h_x and h_y, and the timestep as dt, while u(t, x, y), u(t, x - h_x, y) and u(t, x, y - h_y) denote the equivalent of $u^{n}_{i, j}$, $u^{n}_{i-1, j}$ and $u^{n}_{i, j-1}$ respectively. We can now use this stencil expression to create an operator to apply to our data object:

In [8]:

#NBVAL_IGNORE_OUTPUT
from devito import Operator

# Reset our initial condition in both buffers.
# This is required to avoid 0s propagating into
# our solution, which has a background value of 1.
init_hat(field=u.data[0], dx=dx, dy=dy, value=2.)
init_hat(field=u.data[1], dx=dx, dy=dy, value=2.)
# Create an operator that updates the forward stencil point
op = Operator(Eq(u.forward, stencil, subdomain=grid.interior))

# Apply the operator for a number of timesteps
op(time=nt, dt=dt)

plot_field(u.data[0, :, :])

# Some small sanity checks for the testing framework
assert (u.data[0, 45:55, 45:55] > 1.8).all()
assert np.allclose(u.data[0], u_ref, rtol=3.e-2)

Operator Kernel run in 0.00 s

Great, that looks to have done the same thing as the original NumPy example, so we seem to be doing something right, at least.

A note on performance: During the code generation phase of the previous operator, Devito has created some log output with information about two optimisation steps: the DSE (Devito Symbolics Engine) and the DLE (Devito Loop Engine). We can ignore these for now, because our example is tiny - but for large runs where performance matters, these are the engines that make the Devito kernel run very fast in comparison to raw Python/NumPy.

Now, despite getting a correct looking result, there is still one problem with the above operator: it doesn't set any boundary conditions as part of the time loop. We also note that the operator includes a time loop, but at this point Devito doesn't actually provide any language constructs to explicitly define different types of boundary conditions (Devito is probably still a kind of prototype at this point). Luckily though, Devito provides a backdoor for us to insert custom expressions via the so-called "indexed" or "low-level" API that allows us to encode the Dirichlet boundary condition of the original example.

#### The "indexed" or low-level API

The TimeFunction field we created earlier behaves symbolically like a sympy.Function object with the appropriate indices, e.g. u(t, x, y). If we take a simple first-order derivative of that we have a term that includes the spacing variable h, which Devito uses as the default for encoding $dx$ or $dy$. For example, u.dx simply expands to -u(t, x, y)/h + u(t, x + h, y)/h.
Now, when the Operator creates explicit C code from that expression, it at some point "lowers" that expression by resolving explicit data accesses (or indices) into our grid by transforming it into a sympy.Indexed object. During this process all occurrences of h in data accesses get replaced with integers, so that the expression now looks like -u[t, x, y]/h + u[t, x + 1, y]/h. This is the "indexed" notation, and we can create custom expressions of the same kind by explicitly writing u[...], that is with indices in square-bracket notation. These custom expressions can then be injected into our operator like this:

In [9]:

#NBVAL_IGNORE_OUTPUT
# Reset our data field and ICs in both buffers
init_hat(field=u.data[0], dx=dx, dy=dy, value=2.)
init_hat(field=u.data[1], dx=dx, dy=dy, value=2.)

# For defining BCs, we want to explicitly set rows/columns in our field.
# We can use Devito's "indexed" notation to do this:
x, y = grid.dimensions
t = grid.stepping_dim
bc_left = Eq(u[t + 1, 0, y], 1.)
bc_right = Eq(u[t + 1, nx-1, y], 1.)
bc_top = Eq(u[t + 1, x, ny-1], 1.)
bc_bottom = Eq(u[t + 1, x, 0], 1.)

# Now combine the BC expressions with the stencil to form the operator
expressions = [Eq(u.forward, stencil)]
expressions += [bc_left, bc_right, bc_top, bc_bottom]

op = Operator(expressions=expressions, dle=None, dse=None)  # <-- Turn off performance optimisations
op(time=nt, dt=dt)

plot_field(u.data[0, :, :])

# Some small sanity checks for the testing framework
assert (u.data[0, 45:55, 45:55] > 1.8).all()
assert np.allclose(u.data[0], u_ref, rtol=3.e-2)

Operator Kernel run in 0.00 s

You might have noticed that we used the arguments dle=None and dse=None in the creation of the previous operator. This suppresses the various performance optimisation steps in the code-generation pipeline, which makes the auto-generated C code much easier to look at. So, for the brave, let's have a little peek under the hood...
In [10]:

print(op.ccode)

```c
#define _POSIX_C_SOURCE 200809L
#include "stdlib.h"
#include "math.h"
#include "sys/time.h"

struct dataobj
{
  void *restrict data;
  int * size;
  int * npsize;
  int * dsize;
  int * hsize;
  int * hofs;
  int * oofs;
};

struct profiler
{
  double section0;
  double section1;
  double section2;
};

int Kernel(const float dt, const float h_x, const float h_y, struct dataobj *restrict u_vec, const int time_M, const int time_m, struct profiler * timers, const int x_M, const int x_m, const int y_M, const int y_m)
{
  float (*restrict u)[u_vec->size[1]][u_vec->size[2]] __attribute__ ((aligned (64))) = (float (*)[u_vec->size[1]][u_vec->size[2]]) u_vec->data;
  for (int time = time_m, t0 = (time)%(2), t1 = (time + 1)%(2); time <= time_M; time += 1, t0 = (time)%(2), t1 = (time + 1)%(2))
  {
    struct timeval start_section0, end_section0;
    gettimeofday(&start_section0, NULL);
    for (int x = x_m; x <= x_M; x += 1)
    {
      for (int y = y_m; y <= y_M; y += 1)
      {
        u[t1][x + 1][y + 1] = 1.0F*dt*u[t0][x + 1][y]/h_y - 1.0F*dt*u[t0][x + 1][y + 1]/h_y + 1.0F*dt*u[t0][x][y + 1]/h_x - 1.0F*dt*u[t0][x + 1][y + 1]/h_x + u[t0][x + 1][y + 1];
      }
    }
    gettimeofday(&end_section0, NULL);
    timers->section0 += (double)(end_section0.tv_sec-start_section0.tv_sec)+(double)(end_section0.tv_usec-start_section0.tv_usec)/1000000;
    struct timeval start_section1, end_section1;
    gettimeofday(&start_section1, NULL);
    for (int y = y_m; y <= y_M; y += 1)
    {
      u[t1][1][y + 1] = 1.00000000000000F;
      u[t1][81][y + 1] = 1.00000000000000F;
    }
    gettimeofday(&end_section1, NULL);
    timers->section1 += (double)(end_section1.tv_sec-start_section1.tv_sec)+(double)(end_section1.tv_usec-start_section1.tv_usec)/1000000;
    struct timeval start_section2, end_section2;
    gettimeofday(&start_section2, NULL);
    for (int x = x_m; x <= x_M; x += 1)
    {
      u[t1][x + 1][81] = 1.00000000000000F;
      u[t1][x + 1][1] = 1.00000000000000F;
    }
    gettimeofday(&end_section2, NULL);
    timers->section2 += (double)(end_section2.tv_sec-start_section2.tv_sec)+(double)(end_section2.tv_usec-start_section2.tv_usec)/1000000;
  }
  return 0;
}
```
# Electric field along the y-axis of a charged semicircle

1. Apr 5, 2013

### dumbperson

1. The problem statement, all variables and given/known data

Find the electric field along the y-axis of a charged semicircle (with the center of the circle in the origin) with radius R, with a uniform charge distribution $$\lambda$$.

https://encrypted-tbn0.gstatic.com/...z9K2R-JDyhgQrU0CSl08tKghH-4LTTamtYNjY-w7FNfPA

2. Relevant equations

$$\vec{E} = \frac{1}{4\pi \epsilon_0} \int \frac{dq}{|\vec{r}|^2} \hat{r}$$
$$k = \frac{1}{4\pi \epsilon_0}$$
(r is the vector pointing from a charge dq to your ''test charge'' on the y-axis)

3. The attempt at a solution

$$dq= R\lambda \, d\theta$$
The position vector of the charge dq: $$\vec{r_1} = R \cos(\theta)\hat{x} + R \sin(\theta)\hat{y}$$
The position vector of your ''test charge'' along the y-axis: $$\vec{r_2}=y\hat{y}$$
so the vector $$\vec{r} = R \cos(\theta)\hat{x} + (R \sin(\theta)-y)\hat{y}$$
$$|\vec{r}|= \sqrt{R^2\cos(\theta)^2+R^2\sin(\theta)^2+y^2-2yR\sin(\theta)}=\sqrt{R^2+y^2-2yR\sin(\theta)}$$
so $$\hat{r}= \frac{\vec{r}}{|\vec{r}|} = \frac{ R \cos(\theta)\hat{x} + (R \sin(\theta)-y)\hat{y}}{\sqrt{R^2+y^2-2yR\sin(\theta)}}$$
So the integral becomes $$\vec{E} = kR\lambda \int_0^\pi \frac{ R \cos(\theta)\hat{x} + (R \sin(\theta)-y)\hat{y}}{(R^2+y^2-2yR\sin(\theta))^{\frac{3}{2}}} d\theta$$
So $$E_x = kR^2\lambda \int_0^\pi \frac{\cos(\theta)}{(R^2+y^2-2yR\sin(\theta))^{\frac{3}{2}}} d\theta$$
$$E_y = kR\lambda \int_0^\pi \frac{R\sin(\theta)-y}{(R^2+y^2-2yR\sin(\theta))^{\frac{3}{2}}} d\theta$$

Am I doing this correctly? I managed to solve the integral for E_x and the result is zero, as expected. But I have no clue how to solve E_y; is this integral set up correctly? Well, I could solve it for y=0 pretty easily, but that is not the assignment. Thanks

2. Apr 5, 2013

### ehild

Check the sign of the electric field. Otherwise it looks correct.

ehild

3. Apr 5, 2013

### dumbperson

Alright, thanks.
Do you have any tips on how to solve the integral to get E_y?

4. Apr 5, 2013

### ehild

It cannot be written in closed form, I am afraid. Writing up the potential is easier, but it also involves elliptic integrals.

ehild
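The integrals set up above can be sanity-checked numerically. The sketch below (taking k = lambda = R = 1 for convenience) applies the trapezoidal rule: E_x comes out zero for every y, and at y = 0 the E_y integral reduces to 2kλ/R, matching the easy special case mentioned at the end. As ehild notes, the overall sign still deserves a second look, since the r-vector as written appears to point from the test charge toward dq rather than the other way around.

```python
import math

def field_components(y, R=1.0, k=1.0, lam=1.0, n=20001):
    """Trapezoidal-rule evaluation of the E_x and E_y integrals above."""
    h = math.pi / (n - 1)
    Ex = Ey = 0.0
    for i in range(n):
        t = i * h
        w = 0.5 if i in (0, n - 1) else 1.0  # endpoint weights
        denom = (R * R + y * y - 2.0 * y * R * math.sin(t)) ** 1.5
        Ex += w * math.cos(t) / denom
        Ey += w * (R * math.sin(t) - y) / denom
    return k * R * R * lam * Ex * h, k * R * lam * Ey * h

print(field_components(0.5)[0])  # E_x vanishes for any y (substitute u = sin(theta))
print(field_components(0.0)[1])  # at y = 0 the integral reduces to 2*k*lam/R = 2
```

The E_x cancellation is exact because the integrand is antisymmetric about theta = pi/2, which is also why the hand calculation gave zero.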
# Problem with euler to quaternion rotation

## Recommended Posts

Basically I've got a GUI where the user can input euler angle rotations in degrees for X, Y and Z (so Pitch, Yaw, Roll). Then I'm creating a quaternion based on these values with the DirectX function:

XMQuaternionRotationRollPitchYaw(DegToRad(newRotation.x), DegToRad(newRotation.y), DegToRad(newRotation.z));

DegToRad is my own wrapper function that does:

inline float DegToRad(const float& deg) { return float(deg * (PI / 180.0f)); }

Now the problem I'm having is when inputting 180 degrees as yaw my resulting quaternion looks like this: {3.49459e-007, 0, 1, 0} (w, z, y, x order). In my application it still works, but then I'm writing that out to a text file as a floating point value, and after reading it again my rotation gets completely messed up, which I assume is because of this w-component. Any ideas what's wrong? Is it just a floating point precision problem? Edited by lipsryme

##### Share on other sites

What is the value after you read it back in? Is it 3.49459 by any chance (which would suggest the read function isn't coping with the exponent part correctly)? What function are you using to read it back in? Note 3.49459e-007 is very close to zero (3.49459 * 10 ^ -7), you could just clamp it to zero using an epsilon before writing it out.

##### Share on other sites

The vector it's writing out is actually (0, 1, 0, 0) (xyzw). So it seems like it's already clamping this to zero.

Solved: Oh... it seems like there was some part of the code that did not read the w-component, so it was always 1.
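For what it's worth, the stray w-component is pure floating-point residue: for a yaw-only rotation the quaternion is (x, y, z, w) = (0, sin(yaw/2), 0, cos(yaw/2)), and cos(90°) does not evaluate to exactly zero in finite precision. A sketch of the yaw-only case plus the epsilon clamp suggested above (the 1e-6 threshold is an arbitrary choice, and XMQuaternionRotationRollPitchYaw composes all three axes, so this only models the single-axis case):

```python
import math

def yaw_to_quaternion(yaw_deg):
    """Quaternion (x, y, z, w) for a rotation of yaw_deg about the y-axis."""
    half = math.radians(yaw_deg) / 2.0
    return (0.0, math.sin(half), 0.0, math.cos(half))

def clamp_eps(q, eps=1e-6):
    """Snap near-zero components to exactly 0 before serializing to text."""
    return tuple(0.0 if abs(c) < eps else c for c in q)

q = yaw_to_quaternion(180.0)
print(q)             # w is ~6e-17 in double precision (~3e-7 in 32-bit float)
print(clamp_eps(q))  # (0.0, 1.0, 0.0, 0.0)
```

The clamp hides the cosmetic issue, but as the thread found, the real bug was the reader skipping the w-component entirely.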
## 13 Mar The Pi Day of our lives

Pi Day Doodle by Google in 2010

As most of us know, March 14th every year is celebrated by mathematicians all over the world as $\pi$ Day. The reason for this is quite simple: March 14 is written as 3-14 in the American style of writing dates, and as the value of $\pi$ starts with $3.14159265\cdots$, this date seems to be a good one to celebrate the awesomeness of this ubiquitous mathematical constant. This year, the date is extra special as it will be 3-14-15, which takes the approximation one step further. And such a date occurs only once every century. So unless you live for a really long time, this will be THE $\pi$ Day of your life. Enjoy it to the fullest; the best time to do so would be at 9:26 am. We have already written a lot about $\pi$ here, but in this short piece, we show some instances where $\pi$ has appeared in popular culture. And yes, a very happy $\pi$ day to everyone!

1. Palais de la Découverte is a museum in Paris where they have a special $\pi$ room, in which the digits up to the 707th decimal place are recorded. The digits are large wooden characters attached to the dome-like ceiling. The digits were based on an 1853 calculation by English mathematician William Shanks, which included an error beginning at the 528th digit. The error was detected in 1946 and corrected in 1949.

2. During the 2011 auction for Nortel’s portfolio of valuable technology patents, Google made a series of unusually specific bids based on mathematical and scientific constants, including $\pi$. This was Google’s tribute to this nice constant.

3. In 1897, an amateur American mathematician attempted to persuade the Indiana legislature to pass the Indiana Pi Bill, which described a method to square the circle and contained text that implied various incorrect values for $\pi$, including 3.2.
The bill is notorious as an attempt to establish the value of a scientific constant by legislative fiat. The bill was passed by the Indiana House of Representatives, but rejected by the Senate.

4. In Carl Sagan’s novel Contact it is suggested that the creator of the universe buried a message deep within the digits of $\pi$. The digits have also been incorporated into the lyrics of the song “Pi” from the album Aerial by Kate Bush.

5. In 2010, on $\pi$ Day, Google made a special doodle.

6. In the movie ‘Taare Zameen Par’, there is a scene where numbers are shown flying by, but the approximation to $\pi$ is shown incorrectly.

7. In the television series Elementary, based on the character Sherlock Holmes, there is an episode where $\pi$ and its value are of major importance to the plot of the story.

These are just seven instances where this famous irrational number has inspired some effect in the popular psyche, but for mathematicians and physicists the number is more than just a number. And before we forget, 14th March is also the birth anniversary of two great mathematicians, Albert Einstein and Waclaw Sierpinski.

Featured Image Courtesy: Shutterstock
aligning to circular sequence

3.3 years ago by yaximik • 0

In regard to a 7-year-old discussion about aligning linear to circular sequences - is anyone aware of aligners that can handle such alignment properly? For alignment purposes, for example, CLC-Bio Genomic Workbench does not seem to be aware that a sequence is circular, although it can show rCRS mtDNA as circular. Yet it cannot properly align to it a linear sequence that should align across the origin (pos. 16022-origin-1280) of the circular rCRS. Nor can Geneious, for another example.

alignment

Yes, I guess they are scratching pumpkins... I just wanted to know if anyone in the community is aware of other possibilities, like open source...

Is it NGS data? Because I did it a few times on NGS data with bwa, and to make this possible I just merged two copies of the reference together...

No, this is just alignment of a contig previously assembled separately. CLC Bio developers confirmed that their Genomics Workbench aligner cannot handle circularity. As a workaround they suggested either to use their Map to a Reference tool, which can handle circularity, or to align to a duplicated circular sequence.

Please use ADD COMMENT/ADD REPLY when responding to existing posts to keep threads logically organized. BTW, have you looked at this: Aligning Circular Sequences (old thread but has ideas/tools).

Have you considered Circlator?

2.1 years ago

I know this is an old post - but I wanted to clarify that CLC Genomics Workbench does indeed support mapping reads to circular references. The issue is that (currently) our track viewer is only linear - something we plan to improve on in 2020, as a number of customers have requested CIRCOS-like viewers (for a variety of use cases).
Thus - you can indeed map (correctly) to a circular reference in CLC, and reads that align to the origin will be correctly mapped. But when you export them in SAM/BAM format (for example to visualize in a tool that supports circularized coverage maps) you run into issues with the SAM format not supporting circular references. We have an FAQ on this specific topic: mapping in CLC, exporting to BAM, and reimporting into CLC, you'll see that there's a loss of information due to the SAM/BAM format not supporting these origin-spanning reads correctly; the original CLC data in the same view is, however, correct. Link is here: https://secure.clcbio.com/helpspot/index.php?pg=kb.printer.friendly&id=11#p419

Happy to answer any other questions on this.
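The duplicated-reference workaround mentioned above can be sketched in a few lines: concatenate the circular reference with itself, run any linear alignment against the doubled sequence, and map hits back modulo the original length. This is a generic illustration (plain substring search stands in for the aligner), not CLC-specific behaviour:

```python
def doubled_reference(ref):
    """Concatenate the circular reference with itself so origin-spanning
    queries align contiguously on a linear sequence."""
    return ref + ref

def to_circular(start, end, ref_len):
    """Map 0-based [start, end) coordinates on the doubled reference back
    to the original circle; end < start signals a wrap past the origin."""
    return start % ref_len, end % ref_len if end % ref_len else ref_len

# Toy example: a 10 bp "circle" and a query spanning the origin.
ref = "ACGTACGTAA"
query = "TAAACG"              # last 3 bases + first 3 bases of ref
doubled = doubled_reference(ref)
start = doubled.find(query)   # linear hit at position 7 on the doubled ref
print(to_circular(start, start + len(query), len(ref)))  # (7, 3): wraps origin
```

Real aligners complicate this (a read can hit both copies), but the coordinate arithmetic is the same.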
# Is this discrete-time system time-invariant?

For a discrete-time system, the input signal is $x(n)$, the output signal is $y[n] = x[n+1] - x[1-n]$

I think it's time-invariant, but the solution's manual says it's not. My procedure is:

$T\{x[n-k]\} = x[n - k + 1] - x[1 - (n - k)] = x[n - k + 1] - x[1 - n + k]$

$y[n-k] = x[n-k+1]-x[1-n+k]$

Am I wrong?

• What is $T\{\cdot\}$? – Ian Oct 13 '17 at 10:56

• Are you sure the book does not rather ask about $y[n] = x[n+1] - x[n-1]$? – Did Mar 3 '18 at 11:49

The system you have given is $$y[n]=x[n+1]-x[1-n]=x[n+1]-x[-n+1]$$ You can see that the system is subtracting a shifted and reflected version of the input $x[n]$ from a shifted version of the same input. In particular, the second term shifts $x[n]$ one unit to the right and then reflects it about the y-axis. In this case, for the second term, the system further shifts $x[n-k]$ to the right so it becomes $x[n-k+1]$. It then flips it so it becomes $x[-n-k+1]$. This is not equivalent to the term you obtain when the output is directly shifted by $k$, given by $x[-(n-k)+1]=x[-n+k+1]$. The system is not time-invariant.
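The conclusion can be double-checked numerically with an impulse input: delaying the input by one sample produces an identically zero output, while the delayed original output is not zero, so the two cannot match:

```python
def T(x, ns):
    """Apply y[n] = x[n+1] - x[1-n]; signals are dicts (zero off-support)."""
    return {n: x.get(n + 1, 0) - x.get(1 - n, 0) for n in ns}

ns = range(-4, 5)
y = T({0: 1}, ns)                              # response to the impulse delta[n]
y_delayed = {n: y.get(n - 1, 0) for n in ns}   # y[n-1]: nonzero at n = 0 and n = 2
out_delayed = T({1: 1}, ns)                    # response to delta[n-1]
print(y_delayed)
print(out_delayed)                             # identically zero
print(y_delayed == out_delayed)                # False -> not time-invariant
```

The zero output for the delayed impulse happens because x[n+1] and x[1-n] then coincide for every n, which is exactly the reflection effect the answer describes.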
# How do you find the range of y = z^2 - 3z, given Domain = {-1, 0, 1, 2}?

Sep 8, 2016

The Range $= \{4, 0, -2\}$.

#### Explanation:

$y = z^2 - 3z$, and let $D = \{-1, 0, 1, 2\}$.

By the range of the given function, we mean the set of all values of $y$ obtained by using the values of $z$ chosen from the given domain set, here $D$.

Thus, in notation, the Range $= \{y = z^2 - 3z : z \in D\}$.

Now,

$z = -1 \Rightarrow y = z^2 - 3z = (-1)^2 - 3(-1) = 1 + 3 = 4$

$z = 0 \Rightarrow y = 0$

$z = 1 \Rightarrow y = -2$

$z = 2 \Rightarrow y = -2$

Therefore, the Range $= \{4, 0, -2\}$.
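The same computation can be confirmed programmatically by mapping the rule over the domain and collecting the distinct outputs as a set:

```python
def function_range(f, domain):
    """The range of f is the set of its outputs over the given domain."""
    return {f(z) for z in domain}

# z = 1 and z = 2 both give -2, so the range has only three elements.
print(function_range(lambda z: z**2 - 3*z, {-1, 0, 1, 2}))  # the set {4, 0, -2}
```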
# Combine two streams, but not for instancing (DX9)

## Recommended Posts

Hi,

I have created a cylinder, 45 vertices per slice, 10 + 1 slices, on the Z-axis. The slices are interconnected with the use of an index buffer (so no double points). This gives me a tube. Now I want to "bend" this tube at runtime, and preferably with a lot of GPU acceleration. So I thought using multiple stream sources would be ideal here: I'd create an extra vertex buffer (dynamic) of size 11, with each "vertex" containing a matrix (4 float4's) for a transformation to be done in the vertex shader. The buffer could be changed at every frame if I want to create a bending animation.

But that doesn't seem to work. Am I correct that the stream sources can only be used for instancing? I used to use them for terrain generation, where stream 0 was a grid of x-z coordinates and stream 1 was the y coordinate (and more). It was a 1-on-1 combination and it worked perfectly. But now I need the frequency to be such that every 45 vertices of stream 0 are combined with the one vertex from stream 1. Can anyone tell me if this is possible, and if not, if there is another way to do what I want to do?

At this moment, I am trying to use a texture of format A32B32G32R32F (44 x 1) to store the matrices in and then tex2Dlod-ding them at vertex shading time, but at this moment the results are terrible. But anyway, using a texture for these kinds of things always seemed very "wrong" to me. It's too bad that I need to use DX9, because I have done far more difficult things in DX10.

So, can anyone help me?

##### Share on other sites

In general for this kind of skinning you can use the vertex shader constants to store the skinning matrices. The vertex data then will contain for each vertex the matrix indices that should be used and the weight. For a good quality people tend to use up to 4 matrices per vertex.
The texture solution would work too, but it's not supported by every hardware. Therefore it would depend on your target hardware whether you can go this way.

##### Share on other sites

Quote: Original post by scippie: At this moment, I am trying to use a texture of format A32B32G32R32 (44 x 1) to store the matrices in and then tex2dlod-ding them at vertex shading time but at this moment the results are terrible. But anyway, using a texture for these kind of things always seem very "wrong" to me.

There's really nothing "wrong" with using a texture for skinning. It's pretty simple, flexible, and plenty fast on hardware that has good support for vertex texture fetch (basically any DX10-capable hardware from Nvidia or ATI). In fact it can be faster than indexing into constant registers, since that has its own performance problems (constant waterfalling). Plus in DX9 you have a limited number of constant registers, which limits the number of bones that you can use. Oh, and keep in mind that typically you don't need the full 4x4 matrix for skinning, since the last column is usually just [0 0 0 1]. So you can probably just pack 4x3 matrices.

##### Share on other sites

Ok, thanks, you guys put me on track! I now seem to have some results with the constant matrix table. I don't know why it didn't work before, as I had tried this approach. Now it sort of works. But when I try it with the texture it doesn't work at all; the matrices seem to be corrupt. Can someone tell me what's wrong here?

...
m_pD3DDev->CreateTexture(11 * 4, 1, 1, D3DUSAGE_DYNAMIC, D3DFMT_A32B32G32R32F, D3DPOOL_DEFAULT, &m_txTubeTrans, 0);
...
D3DLOCKED_RECT lr;
m_txTubeTrans->LockRect(0, &lr, 0, 0);
for (int x = 0; x < 11; ++x)
{
    D3DXMATRIX m;
    D3DXVECTOR3 p;
    D3DXVECTOR3 r;
    GetTrajectoryPoint(((float)x / 20.0f), p, r); // (x / 20) is just a temporary test here
    D3DXMatrixTranslation(&m, p.x, p.y, p.z);
    ((D3DXMATRIX*)lr.pBits)[x] = m;
}
m_txTubeTrans->UnlockRect(0);
...
m_fxTube->SetTexture("texTrans", m_txTubeTrans);
...
float4x4 mWorldViewProj;
texture texTrans;
sampler smpTrans = sampler_state
{
    Texture = texTrans;
    MinFilter = POINT;
    MagFilter = POINT;
    MipFilter = POINT;
};

struct VS_IN
{
    float3 pos : POSITION;
    float2 tex : TEXCOORD;
    float tri : BLENDINDICES;
};

PS_IN vs(VS_IN i)
{
    PS_IN o;
    float f = 1.0f / 44.0f;
    float4x4 mT = float4x4(
        tex2Dlod(smpTrans, float4(i.tri + 0 * f, 0, 0, 0)),
        tex2Dlod(smpTrans, float4(i.tri + 1 * f, 0, 0, 0)),
        tex2Dlod(smpTrans, float4(i.tri + 2 * f, 0, 0, 0)),
        tex2Dlod(smpTrans, float4(i.tri + 3 * f, 0, 0, 0))
    );
    o.pos = mul(mul(mT, mWorldViewProj), float4(i.pos.xyz, 1));
    return o;
}
...

The results are totally corrupt, while with the same approach with a constant table, like this:

float4x4 mWorldViewProj;
float4x4 mSkin[11];

struct VS_IN
{
    float3 pos : POSITION;
    float2 tex : TEXCOORD;
    float tri : BLENDINDICES;
};

PS_IN vs(VS_IN i)
{
    PS_IN o;
    float4x4 mT = mul(mSkin[(int)i.tri], mWorldViewProj);
    o.pos = mul(mT, float4(i.pos.xyz, 1));
    return o;
}

with the correct C++ code with it of course, it almost works; it's not corrupt. (Can anyone tell me why I need to do the cast to (int) on the i.tri? If I don't, only the right half of the tube is correct; the left half is flat.) What am I doing wrong in my texture lookup version?

And then, I'm saying it almost works, but it doesn't do what I expect; it doesn't seem to deform at all. If I put something that should really do something inside the matrices, instead of the GetTrajectoryPoint function I used, I still see no deformation at all. I really put a different value in the vertex for the tri-value.
To make sure, here's the code I use to generate the tube:

struct vertex
{
    float x, y, z;
    float tx, ty;
    float tr;
};

std::vector<vertex> vTube;
std::vector<WORD> iTube;
for (int x = 0; x <= 10; ++x)
{
    for (int a = 0; a <= m_nGeometrySteps; ++a)
    {
        float s = sin((float)D3DX_PI / ((float)m_nGeometrySteps / 2.0f) * (float)a);
        float c = cos((float)D3DX_PI / ((float)m_nGeometrySteps / 2.0f) * (float)a);
        vertex vv;
        vv.x = c;
        vv.y = s;
        vv.z = (float)x / 10.0f;
        vv.tr = (float)x;
        vTube.push_back(vv);
    }
    if (x < 10)
    {
        for (int a = 0; a <= m_nGeometrySteps; ++a)
        {
            iTube.push_back(x * (m_nGeometrySteps + 1) + a);
            iTube.push_back((x + 1) * (m_nGeometrySteps + 1) + a);
        }
    }
}
...

And then I put it in a vertex buffer and index buffer. This is my vertex declaration:

D3DVERTEXELEMENT9 decl[] =
{
    {0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
    {0, 0, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},
    {0, 0, D3DDECLTYPE_FLOAT1, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_BLENDINDICES, 0},
    D3DDECL_END()
};

Lots of questions... anyone?

##### Share on other sites

Quote: Original post by MJP: Oh and keep in mind that typically you don't need the full 4x4 matrix for skinning, since the last column is usually just [0 0 0 1]. So you can probably just pack 4x3 matrices.

I will surely implement this when I get things working, but I don't want to add extra possible errors.

##### Share on other sites

And still, can anyone tell me why it doesn't work with two streams? Are two streams only for instancing?

##### Share on other sites

Multiple vertex streams are not just for instancing. Check out this old thread, still applicable with DX9: http://www.gamedev.net/community/forums/topic.asp?topic_id=572675&whichpage=1
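One thing worth double-checking in a declaration like the one above: the second field of D3DVERTEXELEMENT9 is the byte offset of the element within the vertex, so the texcoord and blendindices elements of stream 0 cannot all sit at offset 0. A small sketch of how those offsets accumulate for the posted vertex struct (FLOAT3 = 12 bytes, FLOAT2 = 8, FLOAT1 = 4):

```python
# Byte sizes of the D3DDECLTYPE_* values used in the declaration above.
SIZES = {"FLOAT1": 4, "FLOAT2": 8, "FLOAT3": 12, "FLOAT4": 16}

def element_offsets(types):
    """Running byte offsets for consecutive elements in a single stream."""
    offsets, off = [], 0
    for t in types:
        offsets.append(off)
        off += SIZES[t]
    return offsets

# position (FLOAT3), texcoord (FLOAT2), blendindices (FLOAT1):
print(element_offsets(["FLOAT3", "FLOAT2", "FLOAT1"]))  # [0, 12, 20]
```

With every offset left at 0, the runtime would read position bytes where it expects the blend index, which could also explain the strange behaviour of i.tri; this is an observation about the posted snippet, not a confirmed fix.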
# Angular momentum and impulse

1. Aug 21, 2015

### 24forChromium

Hopefully the image is self-explanatory; if not: A cylinder is rotating around its central axis with angular momentum L1; an angular impulse, ΔL, is then added to the cylinder, perpendicular to L1. The hypothetical result is: the cylinder has one angular momentum at the end, L-result. L-result is equivalent in magnitude to L1, and its direction is shifted counter-clockwise by a certain amount so that a line parallel to L1 drawn from the tip of L-result will meet the tip of ΔL; mathematically, the angle has increased in the counter-clockwise direction by (90-arccos(ΔL / L1)) degrees. (<-- that comes from basic trigonometry)

Is this true? If it is false, what is the right way to do it? Will the cylinder "follow" the angular momentum arrow and also point in the new direction, or will it remain in place and then rotate around the new arrow in a weird fashion?

2. Aug 22, 2015

### Staff: Mentor

The vectors add, so the resulting angular momentum will have a larger magnitude.

Neither. It will precess.

3. Aug 22, 2015

### 24forChromium

I heard from various sources that if the two angular momentum vectors are perpendicular to one another, they will only change the direction of the final angular momentum. I understand, albeit to a shallow extent, precession, and I think precession only occurs when a torque is continuously exerted on a wheel. Also, it would be the most helpful of all if you can give the final angular momentum expressed in terms of L1 and ΔL.

4. Aug 22, 2015

### A.T.

5. Aug 23, 2015

### Staff: Mentor

As a vector: L = L1 + ΔL. This is conservation of angular momentum. As a magnitude: $|L|=\sqrt{L1^2 + ΔL^2}$ - using the right angle between the two components. See the section "Torque-free" in the Wikipedia article.
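The mentor's answer is easy to make concrete: for perpendicular L1 and ΔL the resultant has magnitude sqrt(L1^2 + ΔL^2), so it grows, contrary to the original guess, and it is tilted by arctan(ΔL/L1) toward the impulse:

```python
import math

def combine_perpendicular(L1, dL):
    """Magnitude and tilt (degrees) of L1 + dL when dL is perpendicular to L1."""
    return math.hypot(L1, dL), math.degrees(math.atan2(dL, L1))

mag, tilt = combine_perpendicular(3.0, 4.0)
print(mag)   # 5.0, larger than |L1| = 3.0
print(tilt)  # ~53.13 degrees toward the impulse direction
```

The angle only describes where the new angular-momentum vector points; as noted in the thread, the body itself then precesses about it rather than simply reorienting.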
# Liouville's theorem for subharmonic functions

Liouville's theorem states that a bounded entire function must be constant. A generalization of this is the Liouville theorem for subharmonic functions in the plane. I am looking for a proof of this assertion.

This article gives a proof of the following version of Liouville's theorem:

Liouville Theorem: If $U:\mathbb{R}^2\backslash \{0\} \rightarrow\mathbb{R}$ is a nonconstant harmonic function, then $$\liminf(M(r)/|\log r|)>0$$ either as $r\rightarrow 0$ or as $r\rightarrow \infty$, where for $r>0$, $M(r)=\max \{ U(x): \|x\|=r\}$.
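For reference, the statement usually meant by "Liouville's theorem for subharmonic functions in the plane" is the following qualitative version (a standard classical fact, which the quantitative theorem above refines):

```latex
% Liouville-type theorem for subharmonic functions on the plane.
% Note: this is special to dimension 2; it fails in R^n for n >= 3,
% where -|x|^{2-n} is a bounded-above, nonconstant subharmonic function.
\textbf{Theorem.} \textit{If } u : \mathbb{R}^2 \to [-\infty,\infty)
\textit{ is subharmonic and bounded above, then } u \textit{ is constant.}
```

Taking $u = \log|f|$ for a bounded entire function $f$ (which is subharmonic) recovers the classical Liouville theorem.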
# Famous quotations in their original first-order language Historians everywhere are shocked by the recent discovery that many of our greatest thinkers and poets had first expressed their thoughts and ideas in the language of first-order predicate logic, and sometimes modal logic, rather than in natural language. Some early indications of this were revealed in the pioneering historical research of Henle, Garfield and Tymoczko, in their work Sweet Reason: We now know that the phenomenon is widespread!  As shown below, virtually all of our cultural leaders have first expressed themselves in the language of first-order predicate logic, before having been compromised by translations into the vernacular. $\neg\lozenge\neg\exists s\ G(i,s)$ $(\exists x\ x=i)\vee\neg(\exists x\ x=i)$ $\left(\strut\neg\exists t\ \exists d\ \strut D(d)\wedge F(d)\wedge S_t(i,d)\right)\wedge\left(\strut\neg\exists t\ w\in_t \text{Ro}\right)\wedge\left(\strut \text{Ru}(i,y)\to \lozenge\text{C}(y,i,qb)\wedge \text{Ru}(i)\wedge\text{Ru}(i)\wedge\text{Ru}(i)\wedge\text{Ru}(i)\right)$ $\exists!t\ T(t) \wedge \forall t\ (T(t)\to G(t))$ $\neg B_i \exists g\ G(g)$ $\forall b\ \left(\strut G(b)\wedge B(b)\to \exists x\ (D(b,x)\wedge F(x))\right)$ $(\exists!w\ W_1(w)\wedge W_2(w)), \ \ \exists w\ W_1(w)\wedge W_2(w)\wedge S(y,w)$? 
$\exists s\ Y(s)\wedge S(s)\wedge \forall x\ L(x,s)$ $\exists p\ \left[\forall c\ (c\neq p\to G(c))\right]\wedge\neg G(p)$ $\exists l\ \left[L(l)\wedge \Box_l\left({}^\ulcorner\,\forall g\ \text{Gl}(g)\to \text{Gd}(g){}^\urcorner\right)\wedge\exists s\ \left(SH(s)\wedge B(l,s)\right)\right]$ $(\forall p\in P\ \exists c\in\text{Ch}\ c\in p)\wedge(\forall g\in G\ \exists c\in\text{Cr}\ c\in g)$ $\forall x (F(w,x)\leftrightarrow x=F)$ $B\wedge \forall x\ \left[S(x)\wedge T(x)\to \exists!w\ W(w)\wedge\text{Gy}(x,w)\wedge\text{Gi}(x,w)\right]$ $\exists!x\ D(x)\wedge D(\ {}^\ulcorner D(i){}^\urcorner\ )$ $\forall f\ \forall g\ \left(\strut H(f)\wedge H(g)\to f\sim g\right)\wedge\forall f\ \forall g\ \left(\strut\neg H(f)\wedge \neg H(g)\to \neg\ f\sim g\right)$ $\exists w\ \left(\strut O(w)\wedge W(w)\wedge\exists s\ (S(s)\wedge L(w,s))\right)$ $C(i)\to \exists x\ x=i$ $\neg\neg\left(\strut H(y)\wedge D(y)\right)$ $\neg (d\in K)\wedge\neg (t\in K)$ $W(i,y)\wedge N(i,y)\wedge\neg\neg\lozenge L(i,y)\wedge \left(\strut \neg\ \frac23<0\to\neg S(y)\right)$ $\lozenge \text{CL}(i)\wedge\lozenge C(i)\wedge \lozenge (\exists x\ x=i)\wedge B(i)$ $\forall x\ K_x({}^\ulcorner \forall m\ \left[M(m)\wedge S(m)\wedge F(m)\to\Box\ \exists w\ M(m,w)\right]{}^\urcorner)$ $\forall e\forall h\ \left(\strut G(e)\wedge E(e)\wedge H(h)\to \neg L(i,e,h)\right)$ $\forall p\ \Box\text{St}(p)$ $\lozenge^w_i\ \forall g\in G\ \lozenge (g\in C)$ $\forall m\ (a\leq_C m)$ $\forall t\ (p\geq t)\wedge \forall t\ (p\leq t)$ $\forall x\ (F(x)\iff x=h)$ $(\forall x\ \forall y\ x=y)\wedge(\exists x\ \exists y ([\![x=x]\!]>[\![y=y]\!]))$ $\forall p\ \left(\strut\neg W(p)\to \neg S(p)\right)$ $\exists x\Box\Box x\wedge \exists x\Box\neg\Box x\wedge\exists x\neg\Box\neg\Box x$ $\forall p \left(\strut E(p)\to \forall h\in H\ A(p,h)\right)$ Dear readers, in order to assist with this important historical work, please provide translations into ordinary English in the comment section below of any or all of the 
assertions listed above. We are interested to make sure that all our assertions and translations are accurate. In addition, any readers who have any knowledge of additional instances of famous quotations that were actually first made in the language of first-order predicate logic (or similar) are encouraged to post comments below detailing their knowledge. I will endeavor to add such additional examples to the list. Thanks to Philip Welch, to my brother Jonathan, and to Ali Sadegh Daghighi (in the comments) for providing some of the examples, and to Timothy Gowers for some improvements. ## 29 thoughts on “Famous quotations in their original first-order language” • Can you give us a hint with a little more context? • I’ve taken the Rolling Stones first-order translation that you have above and fitted it to your question that you had at the end. Here t = translations. • Ah, very good! Congratulations! (But I’ve added several more $t$ in the meantime…) 1. Nice. I’ve done a few of them, and have a small quibble about one (the sixth from the end). I think the last part shouldn’t be $\exists w\ G(w)\wedge W(m,w)$. That’s too romantic — even Platonic. However, I can’t work out how to express the real statement in first-order logic. • I’m glad you tried them! And yes, I agree that that one could be improved. Perhaps “in want of” is a modal assertion, like “it is necessary that”, and so the quotation should state that it is necessary that there exists a w with M(m,w). • I’ve changed it now to a modal assertion, which I think is better. 2. I liked Orwell’s “Some animals are more equal than others” (although it seems to be missing the first part that all animals are equal). • (Also, I believe the entire witch scene in Monty Python and the Holy Grail was, and still is, essentially a first-order proof of why being equivalent to duck implies being a witch.) • I have now added the first part. Thanks! 3. Thanks for driving attentions to this interesting project. 
I think one of the above formal quotes, $\exists x\exists y (x=x>y=y)$, is trying to formalize George Orwell’s famous sentence in his Animal Farm (chapter 10): ALL ANIMALS ARE EQUAL, BUT SOME ANIMALS ARE MORE EQUAL THAN OTHERS. But I’m not sure whether the part “more equal than” is formalized properly. As the notion of “being more equal than another thing while all two things are equal” itself is paradoxical (and so a bit hard to comprehend at the first moment), I guess the above formal quote in the presented form is a bit ambiguous. ————————- About Dante’s quote, it is clearly referring to “Abandon All Hope Ye who Enter Here!” (Inferno: Canto III), however the translation of the formal quote (as I, a non-native English speaker, do it) is something like the following: EVERYBODY WHO ENTERS HERE ABANDONS ALL HOPES. Again not sure that the two formal and real quotations are completely same but in this case I think the translation to formal language is done pretty well. ————————- Anyway I think the way ordinary people think and convince each other in their real daily life is not 100% valid according to the formal logic rules. That is why logical fallacies are such useful tools for marketers, politicians, lawyers, etc., to induce people to do what they want even when it is not the logical consequence of the presented facts necessarily! I think human literature is not an exception and looking to the beautiful and seemingly convincing artistic reasoning in most well-known texts of human literature (and also holy texts) will reveal the fact that they are not as strong justifications as they seem when one removes all of their artistic decoration via formalizing them into predicate logic. What remains from them is often a barely meaningful fallacy or paradox. Possibly that is why they look such fascinating because humans masochistically like to suffer their minds by thinking about paradoxes! 
• I guess Leo Tolstoy’s quote is this brilliant one from Anna Karenina: “All happy families are alike; each unhappy family is unhappy in its own way.” (The train scenes in the beginning and end of the story are unforgettable). You might hear this in the following form as a saying too: “All good/sane people are alike; each bad/insane person is bad/insane in its own way.” ———————— Dear Joel, following one of our discussions in mail on Victor Hugo’s Les Misérables, let me also add a favorite quote of mine from this masterwork together with its formalization, however I think the formalized form can’t reflect the true meaning behind this impressive short quote which itself is a kind of abstract for the entire novel! THOSE WHO DO NOT WEEP, DO NOT SEE. $\forall p (\neg W(p)\rightarrow \neg S(p))$ By the way, am I the only person who thinks the translation of logical equivalent form of the above quote, namely $\forall p (S(p)\rightarrow W(p))$, to English is less-impressive than its original “negative form” while in principle they are talking about the same concept?! I think it goes back to some linguistic and psychological characteristics of humans who prefer some forms to another in a set of logically equivalent sentences. • Thanks, Ali, for your examples. I added the Hugo quote, which I like very much! • You are welcome! Let me also suggest a challenging quote for formalizing from far eastern literature. 
It appeared in the opening of Lao Tzu’s famous book “Tao Te Ching” in which he explains the foundations of Taoism and the notion of “Tao” which literally means “way”, “path”, or “principle” of nature that all humans should live in harmony with, otherwise the result will be nothing but their suffering and sorrow: Original text: 道可道非常道 – 名可名非常名 Transliteration: “dào kĕ dào fēi cháng dào – míng kĕ míng fēi cháng míng” Translation: “The Tao that can be told is not the eternal Tao – The name that can be named is not the eternal name.” http://www.bopsecrets.org/gateway/passages/tao-te-ching.htm Interestingly Kanamori refers to this quote in the appendix of his book, Higher Infinite! (see page 482). For further reading on foundations of Taoism (and other religions of India and China), I recommend the following book which I found very well-written: http://www.amazon.com/Mans-Religions-John-B-Noss/dp/0023884703 • I think that would also make sense, but I was thinking that $x\leq_C y$ should mean that $x$ is crueller than $y$, since one prefers to think of cruel things as being nearer the bottom. Clearly one can formalize the relation either way. 4. Rom 3:23: $\forall x (S(x)\wedge F(x))$ which is equivalent to saying that $(\forall x S(x))\wedge(\forall x F(x))$. This example is useful to illustrating the equivalent identity $\exists(a\vee b)=(\exists a)\vee(\exists b)$ in the variety of all monadic algebras. • Thanks for the example! 5. Elvis Presley I think it says “you are nothing but a hound dog” Rene Descartes “I think therefore I am” By the way this web page is pure genius. • Thanks very much! I think you’ve got those two. Any others? 6. Wizard of OZ Dorothy talking to Toto “We are not in Kansas anymore” Jane Austin “It is a truth universally acknowledged ……” 7. 
This might be worth translating back to first-order: Titus 1 (New International Version) 12 – One of Crete’s own prophets has said it: “Cretans are always liars, evil brutes, lazy gluttons.” 13 – This saying is true. 8. May I ask what the “C” stands for in Descartes’ quote “I think; therefore, I am.”? • It stands for “cogito”, which is Latin for the verb “to think” (like the English words cognition, cognate, cogent, cogitate). The famous quotation is “Cogito ergo sum,” and you can read more at https://en.wikipedia.org/wiki/Cogito_ergo_sum. • Thank you so much! • You’ve got it! That is the opening line of Peter Pan.
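The point raised in the comments, that $\forall p\ (\neg W(p)\to \neg S(p))$ and $\forall p\ (S(p)\to W(p))$ say the same thing, is just the propositional contrapositive, and a brute-force truth table over the two predicate values confirms it:

```python
def implies(a, b):
    """Material implication a -> b."""
    return (not a) or b

# Exhaustively check (not W -> not S) against (S -> W).
assignments = [(W, S) for W in (False, True) for S in (False, True)]
agree = all(implies(not W, not S) == implies(S, W) for W, S in assignments)
print(agree)  # True: the two quoted forms are logically equivalent
```

Which of the two equivalent forms sounds better in prose is, as the commenter says, a matter of rhetoric rather than logic.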
## TFindFile and WaitFor

Please post bug reports, feature requests, or any question regarding the DELPHI AREA projects here.

### TFindFile and WaitFor

First, thanks for this awesome component!!! I'm trying to perform several different tasks using one FindFile component. This FindFile is created at run-time. Let's see this execute pseudocode:

Code: Select all

for i := Low(Task) to High(Task) do
begin
  FindFile.Threaded := True; { must }
  FindFile.Criteria...
  FindFile.OnFileMatch := ...
  ...
  FindFile.Execute;
end;

Since FindFile is in threaded mode, this "for" loop is not working properly. Is there any way to use the WaitFor method so that this kind of loop is able to do its job, without making the main thread (main form) hang?

Kimbo
Member
Posts: 1
Joined: August 28th, 2012, 3:44 am

### Re: TFindFile and WaitFor

WaitFor blocks the calling thread anyway. Instead of using a single instance of TFindFile to run the tasks one by one, I suggest creating a new threaded instance of TFindFile for each task, so you can do all tasks simultaneously.

Kambiz
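Kambiz's suggestion, one threaded instance per task rather than a reused one, is the standard fan-out/fan-in pattern. Since the idea is language-independent, here is a minimal Python sketch of it (run_search is a hypothetical stand-in for one Execute call with its own criteria):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_search(task):
    """Stand-in for one threaded search instance configured for `task`."""
    return (task, f"results for {task}")

tasks = ["*.pas", "*.dfm", "*.txt"]
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    futures = [pool.submit(run_search, t) for t in tasks]
    for fut in as_completed(futures):   # wakes as each search finishes
        task, result = fut.result()
        print(task, "->", result)
```

The main thread only blocks while collecting results, not while the searches run, which is the property the original serial loop was missing.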
# Prove $\|T\| = \sup_{\|x\| < 1} \|Tx\|$ Let $$X, Y$$ be Banach spaces. And $$T \in B(X\rightarrow Y)$$. Prove that $$\|T\| = \sup_{\|x\| < 1} \|Tx\|$$ ## Discussion Having trouble seeing how to handle some of these ideas below. Please let me know if there's a simpler way or if I'm on the right track. ## Attempt Since $$X, Y$$ are Banach they are normed spaces. Since $$T$$ is a bounded linear operator (I think that's what that notation means) in a normed space, it is continuous. Recall that by definition of a norm on an operator, the norm of $$T$$ is $$\|T\| = \sup_{x \in X} \frac{\|Tx\|}{\|x\|}, \quad x\neq 0 \tag{\star}$$ Now, let $$(x_n)$$ be Cauchy in $$X$$. Since $$X$$ is Banach, $$(x_n)$$ is convergent to some $$x \in X$$. Note that $$\|x_m - x_n\|< \epsilon\,$$ for all $$m,n > N(\epsilon)$$ implies $$\left| \|x_m\| - \|x_n\| \right| < \epsilon$$ and therefore \begin{align} \frac{\|x_n\|}{\|x_m\|} \longrightarrow1 \quad &\text{as} \quad n,m \longrightarrow +\infty \\ \text{and} \\ \bigg\|\frac{x_n}{\|x_m\|}\bigg\| = \|z_{nm}\| \longrightarrow 1 \quad &\text{as} \quad n,m \longrightarrow + \infty \tag{1} \end{align} But to make our argument align with the proposition, recall that every convergent sequence of real numbers has a monotone subsequence that converges; $$(\|z_{mn}\|)$$ is such a sequence. So take $$(x_{n_k})$$ to be a subsequence of $$(x_n)$$, where $$n_k$$ correspond to the indices that generate a monotone subsequence of $$(\|z_{mn}\|)$$. Then for all $$n_k > N(\epsilon$$), we have that $$(1)$$ holds and we can choose either $$m = n_k, n = n_{k+1}$$ or vice versa depending on whether the sequence defined by $$\|x_{m =n_k}\|$$ is decreasing or increasing in order to keep $$\|z_{mn}\| < 1$$. 
Then by $$(\star)$$ \begin{align} \|T\| = \sup_{x \in X} \frac{\|Tx\|}{\|x\|} &= \sup_{x \in X} \frac{\|T\left(\lim_{n\to\infty}x_n\right)\|}{\|\lim_{m\to\infty}x_m\|} \\ \\ &= \sup_{x \in X} \frac{\lim_{n\to\infty}\|Tx_n\|}{\lim_{m\to\infty}\|x_m\|} \tag{2} \\ \\ &= \sup_{x \in X} \lim_{n,m\to\infty} \bigg\|T\left(\frac{x_n}{\|x_m\|}\right)\bigg\| \tag{3} \\ \\ &= \sup_{\|z_{mn}\|< 1} \lim_{n,m\to\infty} \|Tz_{mn}\| \end{align} where $$(2)$$ is justified by continuity of the norm and continuity of $$T$$, and $$(3)$$ is justified by homogeneity of the norm and linearity of $$T$$. But now I don't know how to finish it off (if it's even right). Also, the proof just feels bad to me. Thanks for the help! • Your formula is not true in $X = \{0\}$. – gerw Mar 20 at 6:48 $$\|Tx\| \leq \|T\|\|x\| \leq \|T\|$$ if $$\|x\|<1$$. Thus RHS $$\leq$$ LHS. For the other way, take any $$x \neq 0$$ and consider $$y=\frac{x}{(1+\epsilon)\|x\|}$$. Then $$\|y\| <1$$, so $$\|Ty\| \leq$$ RHS. Taking the sup over $$x$$ we get $$\|T\| \leq (1+\epsilon) \times$$ RHS. Let $$\epsilon \to 0$$.
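A quick numerical sanity check of the identity (my own sketch, not part of the question or the answer): for the diagonal operator $T(x_1, x_2) = (3x_1, x_2)$ on $\mathbb{R}^2$, the operator norm is $3$, and sampling points strictly inside the unit ball approaches it from below, exactly as in the $y = x/((1+\epsilon)\|x\|)$ argument.

```python
import math

# For T(x1, x2) = (3*x1, x2) with the Euclidean norm, ||T|| = 3.
# Sampling points just inside the unit ball shows sup_{||x||<1} ||Tx||
# approaching, but never reaching, ||T||.

def image_norm(x1, x2):
    """||T(x1, x2)|| for the diagonal operator T = diag(3, 1)."""
    return math.hypot(3 * x1, x2)

op_norm = 3.0  # largest singular value of diag(3, 1)

# Points on the circle of radius 0.999 (strictly inside the unit ball).
sup_sample = max(
    image_norm(0.999 * math.cos(t), 0.999 * math.sin(t))
    for t in (2 * math.pi * k / 10_000 for k in range(10_000))
)
```

The sampled supremum is $0.999 \cdot 3 = 2.997$, attained in the direction of the first coordinate axis; letting the radius tend to $1$ recovers $\|T\| = 3$.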
# Managing Tail Latencies in Large Scale IR Systems ### Abstract With the growing popularity of the World Wide Web and the increasing accessibility of smart devices, data is being generated at a faster rate than ever before. This presents scalability challenges to web-scale search systems – how can we efficiently index, store, and retrieve such a vast amount of data? A large amount of prior research has attempted to address many facets of this question, with the invention of a range of efficient index storage and retrieval frameworks that are able to efficiently answer most queries. However, the current literature generally focuses on improving the mean or median query processing time in a given system. In the proposed PhD project, we focus instead on improving the efficiency of high-percentile tail latencies in large scale IR systems while minimising end-to-end effectiveness loss. Although there is a wealth of prior research on improving the efficiency of large scale IR systems, the most relevant prior work involves predicting long-running queries and processing them in various ways to avoid large query processing times. Prediction is often done through pre-trained models based on both static and dynamic features from queries and documents. Many different approaches to reducing the processing time of long-running queries have been proposed, including parallelising queries that are predicted to run slowly, scheduling queries based on their predicted run time, and selecting or modifying the query processing algorithm depending on the load of the system.
Considering the specific focus on tail latencies in large-scale IR systems, the proposed research aims to: (i) study what causes large tail latencies to occur in large-scale web search systems, (ii) propose a framework to mitigate tail latencies in multi-stage retrieval through the prediction of a vast range of query-specific efficiency parameters, (iii) experiment with mixed-mode query semantics to provide efficient and effective querying to reduce tail latencies, and (iv) propose a time-bounded solution for Document-at-a-Time (DaaT) query processing which is suitable for current web search systems. As a preliminary study, Crane et al. compared some state-of-the-art query processing strategies across many modern collections. They found that although modern DaaT dynamic pruning strategies are very efficient for ranked disjunctive processing, they have a much larger variance in processing times than Score-at-a-Time (SaaT) strategies, which have a similar efficiency profile regardless of query length or the size of the required result set. Furthermore, Mackenzie et al. explored the efficiency trade-offs for paragraph retrieval in a multi-stage question answering system. They found that DaaT dynamic pruning strategies could efficiently retrieve the top-1,000 candidate paragraphs for very long queries. Extending prior work, Mackenzie et al. showed how a range of per-query efficiency settings can be accurately predicted such that 99.99 percent of queries are serviced in less than 200 ms without noticeable effectiveness loss. In addition, a reference list framework was used for training models such that no relevance judgements or annotations were required. Future work will focus on improving the candidate generation stage in large-scale multi-stage retrieval systems.
This will include further exploration of index layouts, traversal strategies, and query rewriting, with the aim of improving early-stage efficiency to reduce the system tail latency, while potentially improving end-to-end effectiveness.

Published in: Proceedings of the 40th International ACM Conference on Research and Development in Information Retrieval (SIGIR 2017)
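The tail-latency objective quoted above (99.99 percent of queries under 200 ms) is a statement about high percentiles rather than means. A minimal sketch of how such a percentile is measured from a latency log; the synthetic exponential latencies and the nearest-rank method are my own assumptions, not part of the proposal:

```python
import random

random.seed(42)

# Simulate 100,000 query latencies in milliseconds: an exponential
# distribution with mean 20 ms gives mostly fast queries plus a heavy tail.
latencies = [random.expovariate(1 / 20) for _ in range(100_000)]

def percentile(values, p):
    """Nearest-rank percentile: smallest value with at least p% of data <= it."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, int(round(p / 100 * len(ordered))) - 1))
    return ordered[k]

p50 = percentile(latencies, 50)        # median latency
p9999 = percentile(latencies, 99.99)   # the tail the proposal targets
```

For this distribution the median sits near 14 ms while the 99.99th percentile is an order of magnitude larger, which is why optimising the mean says little about the tail.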
# How do you write log0.001=x in exponential form? Oct 26, 2015 If we're using base $10$, and I assume we are, $\log \left(0.001\right) = - 3$. Why? #### Explanation: When we write ${\log}_{a} b = c$, we are asking to what power we raise the base $a$ to get $b$. That is, ${a}^{c} = b$. Here the base is $10$, so in fact we are asking to what power we raise $10$ to get $0.001$. This is clearly $- 3$, because ${10}^{-3} = \frac{1}{{10}^{3}} = \frac{1}{1000} = 0.001$. Apologies if I have not grasped your question. Sep 24, 2016 ${\log}_{10} 0.001 = x \iff {10}^{x} = 0.001$ #### Explanation: Log form and exponential form are two ways to say the same thing. They are interchangeable: ${\log}_{a} b = c \iff {a}^{c} = b$ In this case we have ${\log}_{10} 0.001 = x \iff {10}^{x} = 0.001$ The question being asked in each form is: "To what power must 10 be raised to give 0.001?" To change from one to the other, remember: "The base stays the base and the other two change around". The answer for x can be found from: ${10}^{x} = 0.001 = 1 \times {10}^{-3}$, so $x = - 3$. ${\log}_{10} 0.001 = - 3 \text{ or } {10}^{-3} = 0.001$
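The relationship can be checked in a couple of lines (a throwaway sketch, not part of the original answers):

```python
import math

# Log form and exponential form say the same thing:
#   log10(0.001) = x   <=>   10**x = 0.001
x = math.log10(0.001)
```

Evaluating gives x = -3, and raising 10 to that power recovers 0.001.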
## OG Test 4 - Reading 1

Question 1
A musician has a new song available for downloading or streaming. The musician earns $0.09 each time the song is downloaded and $0.002 each time the song is streamed. Which of the following expressions represents the amount, in dollars, that the musician earns if the song is downloaded d times and streamed s times?
• A 0.002d + 0.09s
• B 0.002d − 0.09s
• C 0.09d + 0.002s
• D 0.09d − 0.002s

Question 2
A quality control manager at a factory selects 7 lightbulbs at random for inspection out of every 400 lightbulbs produced. At this rate, how many lightbulbs will be inspected if the factory produces 20,000 lightbulbs?
• A 300
• B 350
• C 400
• D 450

Question 3
$l= 24 + 3.5m$
One end of a spring is attached to a ceiling. When an object of mass m kilograms is attached to the other end of the spring, the spring stretches to a length of l centimeters as shown in the equation above. What is m when l is 73?
• A 14
• B 27.7
• C 73
• D 279.5

Questions 4-5 are based on the following passage.
The amount of money a performer earns is directly proportional to the number of people attending the performance. The performer earns $120 at a performance where 8 people attend.

Question 4
How much money will the performer earn when 20 people attend a performance?
• A $960
• B $480
• C $300
• D $240

Question 5
The performer uses 43% of the money earned to pay the costs involved in putting on each performance. The rest of the money earned is the performer's profit. What is the profit the performer makes at a performance where 8 people attend?
• A $51.60
• B $57.00
• C $68.40
• D $77.00

Question 6
When 4 times the number x is added to 12, the result is 8. What number results when 2 times x is added to 7?
• A -1
• B 5
• C 8
• D 9

Question 7
$y=x^2−6x+8$
The equation above represents a parabola in the xy-plane. Which of the following equivalent forms of the equation displays the x-intercepts of the parabola as constants or coefficients?
• A $y − 8 = x^2 − 6x$
• B $y + 1 = (x − 3)^2$
• C $y = x(x − 6) + 8$
• D $y = (x − 2)(x − 4)$

Question 8
In a video game, each player starts the game with k points and loses 2 points each time a task is not completed. If a player who gains no additional points and fails to complete 100 tasks has a score of 200 points, what is the value of k?
• A 0
• B 150
• C 250
• D 400

Question 9
A worker uses a forklift to move boxes that weigh either 40 pounds or 65 pounds each. Let x be the number of 40-pound boxes and y be the number of 65-pound boxes. The forklift can carry up to either 45 boxes or a weight of 2,400 pounds. Which of the following systems of inequalities represents this relationship?
• A $40x+65y\leq2,400$ and $x+y\leq45$
• B $\frac{x}{40}+\frac{y}{65}\le2400$ and $x+y\le45$
• C $40x+65y\le45$ and $x+y\le2400$
• D $x+y\le2400$ and $40x+65y\le2400$

Question 10
A function $f$ satisfies $f(2)=3$ and $f(3)=5$. A function $g$ satisfies $g(3)=2$ and $g(5)=6$. What is the value of $f(g(3))$?
• A 2
• B 3
• C 5
• D 6

Question 11
Tony is planning to read a novel. The table above shows information about the novel, Tony's reading speed, and the amount of time he plans to spend reading the novel each day. If Tony reads at the rates given in the table, which of the following is closest to the number of days it would take Tony to read the entire novel?
• A 6
• B 8
• C 23
• D 324

Question 12
On January 1, 2000, there were 175,000 tons of trash in a landfill that had a capacity of 325,000 tons. Each year since then, the amount of trash in the landfill increased by 7,500 tons. If y represents the time, in years, after January 1, 2000, which of the following inequalities describes the set of years where the landfill is at or above capacity?
• A 325,000 − 7,500 ≤ y
• B 325,000 ≤ 7,500y
• C 150,000 ≥ 7,500y
• D 175,000 + 7,500y ≥ 325,000

Question 13
A researcher conducted a survey to determine whether people in a certain large town prefer watching sports on television to attending the sporting event. The researcher asked 117 people who visited a local restaurant on a Saturday, and 7 people refused to respond. Which of the following factors makes it least likely that a reliable conclusion can be drawn about the sports-watching preferences of all people in the town?
• A Sample size
• B Population size
• C The number of people who refused to respond
• D Where the survey was given

Question 14
According to the line of best fit in the scatterplot above, which of the following best approximates the year in which the number of miles traveled by air passengers in Country X was estimated to be 550 billion?
• A 1997
• B 2000
• C 2003
• D 2008

Question 15
The distance traveled by Earth in one orbit around the Sun is about 580,000,000 miles. Earth makes one complete orbit around the Sun in one year. Of the following, which is closest to the average speed of Earth, in miles per hour, as it orbits the Sun?
• A 66,000
• B 93,000
• C 210,000
• D 420,000

Question 16
The table above summarizes the results of 200 law school graduates who took the bar exam. If one of the surveyed graduates who passed the bar exam is chosen at random for an interview, what is the probability that the person chosen did not take the review course?
• A $\frac{18}{25}$
• B $\frac{7}{25}$
• C $\frac{25}{200}$
• D $\frac{7}{200}$

Question 17
The atomic weight of an unknown element, in atomic mass units (amu), is approximately 20% less than that of calcium. The atomic weight of calcium is 40 amu. Which of the following best approximates the atomic weight, in amu, of the unknown element?
• A 8
• B 20
• C 32
• D 48

Question 18
A survey was taken of the value of homes in a county, and it was found that the mean home value was $165,000 and the median home value was $125,000. Which of the following situations could explain the difference between the mean and median home values in the county?
• A The homes have values that are close to each other.
• B There are a few homes that are valued much less than the rest.
• C There are a few homes that are valued much more than the rest.
• D Many of the homes have values between $125,000 and $165,000.

Questions 19-20 are based on the following passage.
A sociologist chose 300 students at random from each of two schools and asked each student how many siblings he or she has. The results are shown in the table below. There are a total of 2,400 students at Lincoln School and 3,300 students at Washington School.

Question 19
What is the median number of siblings for all the students surveyed?
• A 0
• B 1
• C 2
• D 3

Question 20
Based on the survey data, which of the following most accurately compares the expected total number of students with 4 siblings at the two schools?
• A The total number of students with 4 siblings is expected to be equal at the two schools.
• B The total number of students with 4 siblings at Lincoln School is expected to be 30 more than at Washington School.
• C The total number of students with 4 siblings at Washington School is expected to be 30 more than at Lincoln School.
• D The total number of students with 4 siblings at Washington School is expected to be 900 more than at Lincoln School.

Question 21
A project manager estimates that a project will take x hours to complete, where x > 100. The goal is for the estimate to be within 10 hours of the time it will actually take to complete the project.
If the manager meets the goal and it takes y hours to complete the project, which of the following inequalities represents the relationship between the estimated time and the actual completion time?
• A x + y < 10
• B y > x + 10
• C y < x − 10
• D −10 < y − x < 10

Questions 22-23 are based on the following passage.
$I=\frac{P}{4\pi r^2}$
At a large distance r from a radio antenna, the intensity of the radio signal I is related to the power of the signal P by the formula above.

Question 22
Which of the following expresses the square of the distance from the radio antenna in terms of the intensity of the radio signal and the power of the signal?
• A $r^2=\frac{IP}{4\pi}$
• B $r^2=\frac{P}{4\pi I}$
• C $r^2=\frac{4\pi I}{P}$
• D $r^2=\frac{I}{4\pi P}$

Question 23
For the same signal emitted by a radio antenna, Observer A measures its intensity to be 16 times the intensity measured by Observer B. The distance of Observer A from the radio antenna is what fraction of the distance of Observer B from the radio antenna?
• A $\frac{1}{4}$
• B $\frac{1}{16}$
• C $\frac{1}{64}$
• D $\frac{1}{256}$

Question 24
$x^2+y^2+4x−2y=−1$
The equation of a circle in the xy-plane is shown above. What is the radius of the circle?
• A 2
• B 3
• C 4
• D 9

Question 25
The graph of the linear function f has intercepts at $(a,0)$ and $(0,b)$ in the xy-plane. If $a+b=0$ and $a\neq b$, which of the following is true about the slope of the graph of $f$?
• A It is positive.
• B It is negative.
• C It equals zero.
• D It is undefined.

Question 26
The complete graph of the function f is shown in the xy-plane above. Which of the following are equal to 1?
I. $f(−4)$
II. $f\left(\frac{3}{2}\right)$
III. $f(3)$
• A III only
• B I and III only
• C II and III only
• D I, II, and III

Question 27
Two samples of water of equal mass are heated to 60 degrees Celsius (°C). One sample is poured into an insulated container, and the other sample is poured into a non-insulated container.
The samples are then left for 70 minutes to cool in a room having a temperature of 25°C. The graph above shows the temperature of each sample at 10-minute intervals. Which of the following statements correctly compares the average rates at which the temperatures of the two samples change?
• A In every 10-minute interval, the magnitude of the rate of change of temperature of the insulated sample is greater than that of the non-insulated sample.
• B In every 10-minute interval, the magnitude of the rate of change of temperature of the non-insulated sample is greater than that of the insulated sample.
• C In the intervals from 0 to 10 minutes and from 10 to 20 minutes, the rates of change of temperature of the insulated sample are of greater magnitude, whereas in the intervals from 40 to 50 minutes and from 50 to 60 minutes, the rates of change of temperature of the non-insulated sample are of greater magnitude.
• D In the intervals from 0 to 10 minutes and from 10 to 20 minutes, the rates of change of temperature of the non-insulated sample are of greater magnitude, whereas in the intervals from 40 to 50 minutes and from 50 to 60 minutes, the rates of change of temperature of the insulated sample are of greater magnitude.

Question 28
In the xy-plane above, ABCD is a square and point E is the center of the square. The coordinates of points C and E are (7,2) and (1,0), respectively. Which of the following is an equation of the line that passes through points B and D?
• A $y=−3x−1$
• B $y = −3(x−1)$
• C $y=−\frac{1}{3}x+4$
• D $y=−\frac{1}{3}x−1$

Question 29
$y=3$
$y=ax^2+b$
In the system of equations above, a and b are constants. For which of the following values of a and b does the system of equations have exactly two real solutions?
• A a = −2, b = 2
• B a = −2, b = 4
• C a = 2, b = 4
• D a = 4, b = 3

Question 30
The figure above shows a regular hexagon with sides of length a and a square with sides of length a.
If the area of the hexagon is $384\sqrt{3}$ square inches, what is the area, in square inches, of the square?
• A 256
• B 192
• C $64\sqrt{3}$
• D $16\sqrt{3}$

Question 31
A coastal geologist estimates that a certain country's beaches are eroding at a rate of 1.5 feet per year. According to the geologist's estimate, how long will it take, in years, for the country's beaches to erode by 21 feet?

Question 32
If h hours and 30 minutes is equal to 450 minutes, what is the value of h?

Question 33
In the xy-plane, the point (3, 6) lies on the graph of the function $f(x) = 3x^2 − bx + 12$. What is the value of b?

Question 34
In one semester, Doug and Laura spent a combined 250 hours in the tutoring lab. If Doug spent 40 more hours in the lab than Laura did, how many hours did Laura spend in the lab?

Question 35
$a=18t+15$
Jane made an initial deposit to a savings account. Each week thereafter she deposited a fixed amount to the account. The equation above models the amount a, in dollars, that Jane has deposited after t weekly deposits. According to the model, how many dollars was Jane's initial deposit? (Disregard the $ sign when gridding your answer.)

Question 36
In the figure above, point O is the center of the circle, line segments LM and MN are tangent to the circle at points L and N, respectively, and the segments intersect at point M as shown. If the circumference of the circle is 96, what is the length of minor arc $\overset{\frown}{LN}$?

Questions 37-38 are based on the following passage.
A botanist is cultivating a rare species of plant in a controlled environment and currently has 3000 of these plants. The population of this species that the botanist expects to grow next year, $N_{\text{next year}}$, can be estimated from the number of plants this year, $N_{\text{this year}}$, by the equation below.
$N_{\text{next year}}=N_{\text{this year}}+0.2(N_{\text{this year}})(1-\frac{N_{\text{this year}}}{K})$
The constant K in this formula is the number of plants the environment is able to support.
Question 37
According to the formula, what will be the number of plants two years from now if K = 4000? (Round your answer to the nearest whole number.)

Question 38
The botanist would like to increase the number of plants that the environment can support so that the population of the species will increase more rapidly. If the botanist's goal is that the number of plants will increase from 3000 this year to 3360 next year, how many plants must the modified environment support?
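For reference, the growth formula from the passage can be applied directly. The helper below is my own sketch; the starting population of 3000 and the values of K come from Questions 37-38:

```python
# Per the passage: N_next = N_this + 0.2 * N_this * (1 - N_this / K)

def next_year(n_this, k):
    return n_this + 0.2 * n_this * (1 - n_this / k)

# Question 37: two-year projection with K = 4000, starting from 3000 plants.
n1 = next_year(3000, 4000)   # one year out
n2 = next_year(n1, 4000)     # two years out; round to the nearest whole plant

# Question 38: solve 3360 = 3000 + 0.2*3000*(1 - 3000/K) for K.
K_needed = 3000 / (1 - (3360 - 3000) / (0.2 * 3000))
```

One year out the population is 3150; iterating once more and rounding gives the two-year figure, and the Question 38 rearrangement follows from isolating K in the growth equation.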
# [NTG-context] indenting \startlines... \stoplines

Sun Jan 1 10:30:56 CET 2012

On 31.12.2011, at 20:33, Pablo Rodríguez wrote:

> Hi there,
>
> I want a \startlines... \stoplines environment to be 10cm away
> from the left margin.
>
> I know this is a basic question, but I have no idea how to do it.

As the lines environment has no margin keys, you need the narrower environment to increase the left margin.

\definenarrower[narrowlines][left=10cm]

\setuplines
  [before={\startnarrowlines[left]},
   after=\stopnarrowlines]

\starttext
\showframe
\startlines
One
Two
Three
\stoplines
\stoptext

Wolfgang
# A charge of 6 C passes through a circuit every 3 s. If the circuit can generate 4 W of power, what is the circuit's resistance? The current in the circuit is $I = \frac{\text{charge passing through it}}{\text{time}} = \frac{6\ \text{C}}{3\ \text{s}} = 2\ \text{A}$ Now $\text{Power}\ (P) = I^2 \times R$, where $R$ = resistance. So $R = \frac{P}{I^2} = \frac{4}{2^2} = 1\ \Omega$
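The same arithmetic as a script (my own check, mirroring the two steps above):

```python
# I = Q / t, and P = I**2 * R, so R = P / I**2.
Q = 6.0   # charge, coulombs
t = 3.0   # time, seconds
P = 4.0   # power, watts

I = Q / t        # current: 2 A
R = P / I ** 2   # resistance: 1 ohm
```

Rearranging $P = I^2 R$ for $R$ is the only algebra needed once the current is known.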