# 18x^2 - 54x - 20: Expand and simplify
###### Question:
18x^2 - 54x - 20
Expand and simplify
### Give two reasons why both men and women could become victims of violence
Give two reasons why both men and women could become victims of violence...
### Look up the ratio and determine how much water is necessary to completely dissolve 200 grams of potassium
Look up the ratio and determine how much water is necessary to completely dissolve 200 grams of potassium chloride at 40°C? (At 40°C a saturated solution of KCl has what ratio of solvent to solute?...
### 1. What circumstances led to the event described in document 4? Articles of Confederation
1. What circumstances led to the event described in document 4? Articles of Confederation...
### 7 + z < 3: can someone help me solve this inequality?
7 + z < 3: can someone help me solve this inequality?...
### Determine the value of k that partitions a segment into a ratio of 1:4
Determine the value of k that partitions a segment into a ratio of 1:4...
Opal Production Company uses a standard costing system. The following information pertains to the current year: Actual factory overhead costs ($15,000 is fixed) $50,000; Actual direct labor costs (10,000 hours) $130,000; Standard direct labor for 6,000 units: Standard hours allowed 9,500 hours Lab...
### In the standard (x, y) coordinate plane, which equation represents a line through the point (6, 1) and perpendicular to the line with the equation y = (3/2)x + 1?
In the standard (x, y) coordinate plane, which equation represents a line through the point (6, 1) and perpendicular to the line with the equation y = (3/2)x + 1? A.) y = −(3/2)x − 8 B.) y = −(2/3)x − 3 C.) y = −(2/3)x + 1 D.) y = −(2/3)x + 5 E.) y = −(3/2)x + 10...
### Marc agrees to sell Diana 500 copies of a book for $3.50 per book. Marc breaches the contract by not delivering the books. At
Marc agrees to sell Diana 500 copies of a book for $3.50 per book. Marc breaches the contract by not delivering the books. At the time of the breach, the books are available from the publisher for$4.50 each. Diana’s damages are:....
### Air traffic controller Seymour Plains must quickly calculate the altitude of an incoming jet. He records the jet’s angle of
Air traffic controller Seymour Plains must quickly calculate the altitude of an incoming jet. He records the jet’s angle of elevation as 8°. The jet signals that its land (horizontal) distance from the control tower is 74 km. Calculate the altitude of the jet to the nearest meter...
### Which word does not suggest good cheer? a. accolade b. cordial c. enervate d. sanguine
Which word does not suggest good cheer? a. accolade b. cordial c. enervate d. sanguine...
### Which of the following is a precaution a worker should take to avoid on-the-job fall hazards? a. assume that fall protection
Which of the following is a precaution a worker should take to avoid on-the-job fall hazards? a. assume that fall protection equipment and devices are safe before each use. b. attend and participate in fall prevention training. c. avoid using fall protection equipment that can slow down a worker’...
Please solve A and B. Please show your work and explain (20 points)...
### Look closely at the phrase, "now wished for the last friend, death, to relieve me." What is Equiano
Look closely at the phrase, "now wished for the last friend, death, to relieve me." What is Equiano referring to? I was soon put down under the decks, and there I received such a salutation [greeting] in my nostrils as I had never experienced in my life: so that, with the loathsomeness [horriblenes...
### I need bff please i would really appreciate it
I need bff please i would really appreciate it...
### Which equation is shown on the graph? a. y = x – 3 b. y = x + 3 c. y = 3x d. y = 2x – 3
Which equation is shown on the graph? a. y = x – 3 b. y = x + 3 c. y = 3x d. y = 2x – 3...
Explain why $\angle P$ is not an appropriate name for $\angle 2$. Then give an appropriate name for $\angle 2$. Giving 20 points! This is for geometry...
### What is the passage from Declaration of Sentiments described as? a: offensive b: lamenting c: assured d: disillusioned
What is the passage from Declaration of Sentiments described as? a: offensive b: lamenting c: assured d: disillusioned...
### The correct writing in the SI must be characters
The correct writing in the SI must be characters...
Need help with this as soon as possible...
At December 31, 2019, before any year-end adjustments, Karr Company's Insurance Expense account had a balance of $1,450 and its Prepaid Insurance account had a balance of $3,800. It was determined that $2,000 of the Prepaid Insurance had expired. The adjusted balance for Insurance Expense for the ye...
# How to predict next failure based on past measurements and failures?
I'm working with a vacuum chamber. I have one test per day in which we test if the chamber's seals are still good enough for operation. In each test I have measurements every 60 seconds for 5 minutes (0, 60, 120, 180, 240, 300 seconds). I know that if the vacuum difference between one measurement and the prior one is bigger than 0.04 the test fails.
This would be a compact dataset: if the last column is a 0 the test went OK. If it is a 1 the test failed.
My goal is to determine when the machine will fail the test, so that the technical team can do some predictive maintenance before the machine breaks down. I started working with survival analysis in R to try to get somewhere, but I'm getting confused since the result of the test (failed or OK) depends on the relation between measurements on that same test.
What would be the best approach for this?
• "I have one test per day" — From the dates in your screenshot, it looks like tests aren't run every day, and sometimes there are two tests in one day. Mar 12, 2017 at 18:51
• That is a compact example of my database and that is why some days might be missing. For the days with two tests: when one test fails the technician services the machine and then the test is run again to see if the result is "OK" Mar 13, 2017 at 9:27
• 10/05/11 has two tests even though both passed. Mar 13, 2017 at 14:25
• You are right. Do you have any suggestions on best practices for predicting failed tests? Mar 13, 2017 at 14:31
• Not really. I do a lot with predictive models, but my skills are in longitudinal-style analysis, where you have lots of subjects, only a few timepoints each, and a bunch of features you can use for prediction. Here you have only one subject, a lot of timepoints, and no features other than time; this is the usual scenario in time-series analysis. You should probably transform your data to a 1-dimensional time series by replacing each set of six measurements with $\max\{|t_0 - t_{60}|, |t_{60} - t_{120}|, \dots, |t_{240} - t_{300}|\}$; then you just need to predict when this hits 0.04. Mar 13, 2017 at 15:46
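For concreteness, here is a minimal sketch of that per-test reduction (written in C; the array contents and names are hypothetical, only the six-readings-per-test layout and the 0.04 threshold come from the question):

```c
#include <math.h>
#include <stdio.h>

/* One test = six vacuum readings taken at 0, 60, ..., 300 seconds.
 * Reduce the test to a single feature: the largest absolute difference
 * between consecutive readings. The test fails when it exceeds 0.04. */
double max_jump(const double readings[6]) {
    double worst = 0.0;
    for (int i = 1; i < 6; i++) {
        double jump = fabs(readings[i] - readings[i - 1]);
        if (jump > worst)
            worst = jump;
    }
    return worst;
}

int main(void) {
    double test[6] = {0.10, 0.12, 0.15, 0.17, 0.18, 0.20}; /* hypothetical readings */
    double feature = max_jump(test);
    printf("feature = %.3f -> %s\n", feature, feature > 0.04 ? "FAIL" : "OK");
    return 0;
}
```

Tracking this single number per test over time then gives a univariate series whose drift toward the 0.04 threshold can be extrapolated with ordinary time-series tools.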
# Decomposition of Artin L functions
The Dedekind zeta function of an abelian extension $E$ of $\mathbb{Q}$ factors as a product of Artin $L$-functions $L(s, \chi)$, where the product runs over all irreducible representations $\chi$ of $\mathrm{Gal}(E/\mathbb{Q})$.
Question: What is known for an irreducible representation $\sigma$ of $G(F) = \mathrm{Gal}(\overline{\mathbb{Q}}/F)$? How does the Artin $L$-function decompose? Something like $$L_F(s, \sigma) = \prod\limits_{\sigma' \subset \mathrm{Ind}_{G(F)}^{G(E)} \sigma} L_E(s, \sigma'),$$ where $F$ is a finite extension of $E$?
Surely the fact that the Dedekind zeta function of $E$ factorises is simply a restatement of the fact that the regular representation of a group $G$ factorises as a product over all the irreducible representations of $G$? Artin reciprocity is more the statement that any L-function associated to a one-dimensional representation is in fact a Hecke L-function. – Daniel Loughran Jun 8 '11 at 9:42
Yes, this is true, and is a standard consequence of the inductivity properties of Artin L-functions. – David Hansen Jun 8 '11 at 10:21
Anywhere Artin L-functions are sold. – David Hansen Jun 8 '11 at 10:26
For example, try taking a look at Chapter 2 of M. Ram Murty & V. Kumar Murty, Non-vanishing of L-functions and applications, Birkhauser 1997. – Stefano V. Jun 8 '11 at 11:55
For $F$ a number field, the above product is incorrect. For example it would imply that $1=\prod_{\sigma' \subset \mathrm{Ind}^{G(E)}_{G(F)}\sigma,\ \sigma'\neq\sigma} L(s,\sigma',F)$, which is false unless $F=\mathbb{Q}$. What is true however is that $L(s,\sigma,F) = L(s, \mathrm{Ind}^{G(E)}_{G(F)}(\sigma), E)$ for any $E\subset F.$ – JSpecter Jun 8 '11 at 13:24
You should define what you mean by a decomposition of an Artin $L$-function. If you assume standard conjectures of Langlands and Selberg, then the Artin $L$-function of an irreducible representation of $G(\mathbb{Q})$ is a primitive function in the Selberg class, hence it has no nontrivial decomposition there (or among Artin $L$-functions for that matter). If you start with an irreducible representation $\sigma$ of $G(F)$, then $L_F(s,\sigma)=\prod_\rho L_\mathbb{Q}(s,\rho)^{m(\rho)}$, where $\rho$ runs through the irreducible representations of $G(\mathbb{Q})$ and $m(\rho)$ denotes the multiplicity of $\rho$ in the induced representation of $\sigma$ from $G(F)$ to $G(\mathbb{Q})$. This should be the unique maximal factorization into $L$-functions over $\mathbb{Q}$. In particular, if $F/\mathbb{Q}$ is Galois, then $L_F(s,\sigma)$ should be "irreducible". For a reference I recommend Murty's paper here.
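As a concrete illustration of such a maximal factorization over $\mathbb{Q}$ (a standard example, not from this thread): for a quadratic field $E=\mathbb{Q}(\sqrt{d})$, inducing the trivial character of $G(E)$ up to $G(\mathbb{Q})$ gives the regular representation of $\mathrm{Gal}(E/\mathbb{Q})$, so
$$\zeta_E(s)=\zeta(s)\,L(s,\chi_d),$$
where $\chi_d$ is the quadratic Dirichlet character attached to $E$; both degree-one factors are primitive, so no finer factorization over $\mathbb{Q}$ exists.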
Thinking from the automorphic point of view, it should be an $L$-function over $\mathbb Q$ corresponding to the automorphic induction of the Hecke character to a representation $\Pi$ of $GL_N(\mathbb A_{\mathbb Q})$, where $N=[E:\mathbb Q]$. However, automorphic induction is only known for cyclic (whence solvable) and non-normal cubic extensions.
Depending on your extension and what you induce, it may be that $\Pi$ is cuspidal, but in general it should break up as an isobaric sum $$\Pi = \pi_1 \boxplus \cdots \boxplus \pi_r,$$ where the $\pi_i$'s are cuspidal. ($r=1$ and $\pi_1 = \Pi$ if $\Pi$ is cuspidal). Then your desired $L$-function decomposition over $\mathbb Q$ is $$L(s,\pi_1)L(s,\pi_2) \cdots L(s,\pi_r).$$
In the case of the Dedekind zeta function for an abelian extension, these $\pi_i$'s correspond to the irreducible characters of Gal$(E/\mathbb Q)$. For more general extensions, one knows this when each irreducible representation of Gal$(E/\mathbb Q)$ is known to be modular (corresponding to a cuspidal automorphic representation).
• Papers and Research Reports •
### Studies on afforestation techniques for oaks in the semiarid areas of western Liaoning
1. College of Forestry, Shenyang Agricultural University, Shenyang 110161; Wujiazi Afforestation Station of Lingyuan, Liaoning Province, Lingyuan 122507; Hedong Afforestation Station of Lingyuan, Liaoning Province, Lingyuan 122500; Chaoyang Forest Farm of Chaoyang County, Liaoning Province, Chaoyang 122000
• Received: 2000-05-12 Revised: 2003-02-27 Online: 2003-11-25 Published: 2003-11-25
### STUDIES ON THE AFFORESTATION TECHNIQUES FOR THREE OAK SPECIES IN THE SEMIARID AREAS OF WESTERN LIAONING PROVINCE
Cui Jianguo, Cui Wenshan, Bai Ruixing, Li Demin, Lan Xianzhen
1. College of Forestry, Shenyang Agricultural University, Shenyang 110161; Wujiazi Afforestation Station of Lingyuan, Liaoning Province, Lingyuan 122507; Hedong Afforestation Station of Lingyuan, Liaoning Province, Lingyuan 122500; Chaoyang Forest Farm of Chaoyang County, Liaoning Province, Chaoyang 122000
• Received: 2000-05-12 Revised: 2003-02-27 Online: 2003-11-25 Published: 2003-11-25
Abstract:
Pinus tabulaeformis forest mixed with deciduous oak species such as Quercus mongolica and Q. liaotungensis is the zonal native vegetation of western Liaoning Province. However, the artificial vegetation that currently dominates western Liaoning Province consists of P. tabulaeformis monocultures. It is therefore of great significance to introduce oak species into the pure P. tabulaeformis plantations, both to promote the restoration of P. tabulaeformis forest mixed with oak and to support the sustainable management of the pure plantations. The objective of this study was to investigate afforestation techniques for Q. mongolica, Q. liaotungensis and Q. acutissima under the semiarid conditions of western Liaoning Province. In view of the natural conditions there, a series of direct-seeding and planting experiments with oak species was carried out under different shading conditions: in cutting strips of P. tabulaeformis stands, under the canopy of P. tabulaeformis stands, and in open P. tabulaeformis stands. The results showed that oak growth during the first 3~4 years after seeding or planting was characterized by root-system growth, dominated, however, by elongation of the taproot; the absorptive roots were too few to maintain the water balance between the above- and below-ground parts of the seedlings and saplings. This was the main reason for the failure of oak afforestation on open land in western Liaoning Province. Under shading, the transpiration rate of oak seedlings decreased greatly because of the drastic reduction of light intensity, which reduced water loss in summer; in winter, and at the turn of winter and spring, the seedling death rate caused by top-drying or stem withering after physiological drought was also greatly reduced. Both effects contributed to the high survival rate and satisfactory growth of the oak seedlings and saplings. The survival rate could be as high as 100% in the first few years when afforestation was done under the closed canopy of a P. tabulaeformis stand, but the height and diameter increments were small, and this became more obvious as the seedlings aged. The survival rate of oak saplings at the age of 6 years was more than 90%; the average basal diameter ranged from 0.45 to 0.95 cm; the average tree height from 19.5 to 32.3 cm; the root system reached a depth of 70.5 cm, with a width in diameter of 1.728 cm and a width in diameter of 0.344 cm where the root was broken. With the increase in the number of lateral and fibrous roots and the enhancement of their absorptive capability, the oak saplings entered a stage of stable growth from 6~7 years onwards. Based on this study, it was concluded that the following techniques should be followed to ensure the success of artificial oak afforestation in western Liaoning Province: (1) afforestation under shelter, such as in cutting strips of P. tabulaeformis stands, under the canopy of P. tabulaeformis stands, or in open P. tabulaeformis stands, while afforestation under closed canopy should be avoided; (2) direct seeding in autumn, with careful site preparation, seed screening and classification before seeding, and a mulching soil layer of 5~8 cm, avoiding places heavily damaged by mice or hares; (3) selection of sites with a thick soil layer; and (4) careful tending, particularly stumping of saplings.
Key words: Semiarid areas, Quercus mongolica, Q. liaotungensis, Q. acutissima, Afforestation techniques
southwest
A quick way to alter the database schema based on model changes for an app which doesn't (yet) need the power of full south migrations.
Usage
Initially, you'll need to set up the initial definitions for all (non-south) apps:
manage.py syncdb
Then after making a model change to an app you can see the changes, and optionally apply them (requires confirmation) by using the new alterdb command:
manage.py alterdb myapp
If you'd rather just see the SQL than alter the database directly:
manage.py alterdb myapp --sql
If you have manually altered the database and would like to forget the old model definition for an app, you can reset the definition:
manage.py alterdb myapp --reset
## Intermediate Algebra (12th Edition)
$m=14$
$\bf{\text{Solution Outline:}}$ To solve the given radical equation, $\sqrt[3]{2m-1}=\sqrt[3]{m+13} ,$ raise both sides of the equation to the third power. Then use properties of equality to isolate the variable and solve. Finally, check the solution(s) against the original equation. $\bf{\text{Solution Details:}}$ Raising both sides of the equation to the third power results in \begin{array}{l}\require{cancel} \left( \sqrt[3]{2m-1} \right)^3=\left( \sqrt[3]{m+13} \right)^3 \\\\ 2m-1=m+13 .\end{array} Using the properties of equality to isolate the variable results in \begin{array}{l}\require{cancel} 2m-m=13+1 \\\\ m=14 .\end{array} Upon checking, $m=14$ satisfies the original equation: both sides equal $\sqrt[3]{27}=3$.
Opened 7 years ago
Closed 6 years ago
# 'Cannot operate on a closed cursor' exception causes db to appear as if upgrade is needed
Reported by: Daniel Abel
Owned by: Alec Thomas
Priority: normal
Component: TagsPlugin
Severity: blocker
Keywords: cursor, ProgrammingError, closed
Cc: Steffen Hoffmann
Trac Release: 0.12
I'm trying to configure an existing Trac instance to work from under virtualenv (Trac 0.12 with SQLite, wsgi and apache). I ran into a situation where, after running trac-admin /path/to/env upgrade and wiki upgrade, Trac, when called by the webserver, says that I have to run the upgrade (i.e. shows an error page to this effect). Running upgrade again on the command line says no upgrade is necessary. That is, running Trac from the command line and having it called from the webserver (via wsgi) give two different answers to the "does the environment need to be upgraded?" question.
I managed to track this down to the fact that TagModelProvider's environment_needs_upgrade returns True when called from wsgi, and False when called from the console. The problem appears to be that the cursor.execute('select count(*) from tags') line in there throws an exception, but not because the db is old; rather because "Cannot operate on a closed cursor".
Moving the cursor = db.cursor() line to below the if self._need_migration(db) check, i.e. to just above the try/except, appears to solve this issue.
So, suggestions:
1. the except: line in environment_needs_upgrade() shouldn't be so greedy, i.e. it shouldn't swallow all exceptions (if possible to filter out those caused by an old database)
2. the 'cursor=db.cursor()' line should be moved to just above where it is first used.
Note that I assume the situation I ran into would be very difficult to reproduce, but the suggestions above won't have any drawbacks, and just help make the code cleaner.
### comment:1 Changed 7 years ago by Ryan J Ollos
Summary: 'Cannot operate on a closed cursor' eception causes db to appear as if upgrade is needed → 'Cannot operate on a closed cursor' exception causes db to appear as if upgrade is needed
### comment:2 Changed 6 years ago by Robert Corsaro
I experience the same issue on a fresh install of trac running with tracd. The described fix (moving the cursor under the need_migration line) fixes it for me.
### comment:3 Changed 6 years ago by Robert Corsaro
Severity: normal → blocker
### comment:5 follow-up: 7 Changed 6 years ago by Ryan J Ollos
Cc: Steffen Hoffmann added; anonymous removed
#5345 appears to be a duplicate. Or, at least much of what is discussed in the comments of that ticket appears to be a duplicate issue.
### comment:6 Changed 6 years ago by Ryan J Ollos
Description: modified (diff)
In particular comment:5:ticket:5345 describes essentially the same fix, and several users report that the fix has successfully worked for them. The trunk has since been modified to move cursor = db.cursor() below if self._need_migration(db):, so it seems unlikely that the issue is that _need_migration is creating another cursor and forcing the close of the cursor in environment_needs_upgrade (see also [5655] and #4996). However, from what I've seen in this and other tickets, it seems to be important that the cursor is obtained within the try block.
I'm still trying to understand this issue, but as far as I can tell it would be at worst case harmless to move cursor = db.cursor() to within the try block.
### comment:7 in reply to: 5 Changed 6 years ago by Steffen Hoffmann
Keywords: cursor added
Resolution: → duplicate
Status: new → closed
#5345 appears to be a duplicate. Or, at least much of what is discussed in the comments of that ticket appears to be a duplicate issue.
Confirmed, as per your own findings in the following comment. Let's make this trivial fix before we get even more reasonable complaints, NOW. Following up over there...
# Force on pneumatic cylinder [closed]
A pneumatic cylinder is kept vertically straight on a weighing scale, which is set at 0 for the setup.
Now a force of 1000 N is applied to the cylinder.
What is the reading of the scale?
Assume the outlet/inlet to be blocked, hence the mass of air inside the cylinder shall remain constant at all times, and the cylinder can move in the y (vertical) direction by compressing the air inside it on application of the external force.
Let the bore diameter of the cylinder be 60 mm and the stroke 50 mm.
Will the reading be less than, equal to, or more than 1000 N?
More info can be supplied if required.
## closed as too localized by Qmechanic♦, Manishearth♦, Sklivvz♦ Jan 3 '13 at 13:27
This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question.
I'm sure David will step in as well, but let me be the first to welcome you to the physics SE. :) – AdamRedwine Feb 2 '12 at 15:49
Please see our homework policy. We expect homework problems to have some effort put into them, and deal with conceptual issues. If you edit your question to explain (1) What you have tried, (2) the concept you have trouble with, and (3) your level of understanding, I'll be happy to reopen this. (Flag this message for ♦ attention with a custom message, or reply to me in the comments with @Manishearth to notify me) – Manishearth Jan 3 '13 at 13:27
@Manishearth It was not homework; one of the moderators tagged it with the HW tag, not me. There was some confusion regarding the cylinder forces between me and my friend, and in the absence of any guidance we decided to post here. Is it wrong to ask your doubts here, or should we ask only super smart questions?? I believe no question is a stupid question. Actually it was all about the buckling strength & weight experienced by the components below the cylinder fitted on my robot; it's a different story. You reopen it or not, is your choice, but perhaps I am not so intelligent, forgive me for my ignorance. – user7476 Jan 4 '13 at 14:09
@user7476: Just to be clear (don't worry, this isn't evident to most), the homework policy applies to all such questions ("given data X, find Y"). This still seems salvageable, though -- give a bit of your own discussion (how you've tried to solve the confusion between yourselves), and then ping me :) – Manishearth Jan 4 '13 at 19:10
# Confusion
Logic Level 1
$\large 3 \square 3=1$
Which sign $(+,\ \times,\ -)$ makes the equation above true?
×
# Using as few pins as possible for multiple buttons
I'm using an Arduino Nano to make some sort of "console" and I'm currently designing the controller. However, I would like at least two of them with 6 buttons each. I have reserved 4 pins for the VGA output so there simply aren't enough pins.
The controller only consists of the 6 buttons, with a cable going back to the Arduino. The cable must include power and ground. I have considered using radio, but that's out of scope for me at the moment.
• That's a 3x4 matrix. You need seven pins or external circuitry like an I2C multiplexer. – winny Jul 23 '16 at 10:38
• Does the nano include an ADC? Do you need buttons to be pressed simultaneously? If yes, how many? – Vladimir Cravero Jul 23 '16 at 10:42
• The Nano includes several analog pins (if that's what you mean) and yes, I would like them to be pressed simultaneously. – James Pae Jul 23 '16 at 10:45
• and how many bits do you have? 8 bits should be enough. I am writing an answer though. – Vladimir Cravero Jul 23 '16 at 10:49
• @VladimirCravero Unsure, but it uses an ATmega328. – James Pae Jul 23 '16 at 10:52
You could use the good, old, R2R DAC:
[Schematic: R2R DAC with push buttons, created using CircuitLab]
All r should be small enough with respect to R to pull down the push button output when it is released. Some good values you can start with are:
• $r=1k\Omega$
• $R=47k\Omega$
• $2R=100k\Omega$
Better yet, use one $100k\Omega$ resistor for R and two of them in series for the 2R resistor.
I will not dig into how this circuit works; refer to Wikipedia for an explanation. What you want to know is that if we associate a digital number to the push button states, the output voltage is an analog representation of that number.
If the button is pressed, we say it is 1, if it is not pressed we say it is 0. SW1 is the LSB, i.e. the rightmost bit, while SW6 is the MSB, i.e. the leftmost.
As an example, if you press SW6, 3 and 2 the digital number is $D=100110$, which in decimal is 38.
The output voltage depends on $V_{dd}$ and is: $$V_{out}=V_{dd}\frac{D}{2^N}$$ where N is the number of switches, in this case 6. For the example above ($D=38$, $N=6$), a 5 V supply gives $V_{out}=5\cdot 38/64\approx 2.97$ V.
You can sample the analog voltage with the arduino ADC and then convert it back to a digital representation of the status.
Since you have an eight bit converter, the routine will be pretty neat:
uint8_t sw_status;
while (1) {
    sw_status = adc_read() >> 2;  // hypothetical helper: 8-bit ADC sample, keep the top 6 bits
    // do something with sw_status. The LSB represents SW1, and so on, e.g.:
    if (sw_status & (1 << 2)) {
        // SW3 is pressed here!
    }
}
There are some problems with this. First of all, you need precise resistors, or you will get non-monotonic behavior, which is very bad. The ADC on the Nano should be precise enough, but you need to check if it accepts rail-to-rail inputs. And you get some error because you do not have SPDT push buttons, and you need to use the pull-down. Moreover, if you have long wires going to the console, you will probably need to buffer the signal to avoid spurious reads.
All in all this works and doesn't require many components, but it is suboptimal at the least. I would definitely go for I2C IO expanders. But this solution is pretty neat and a bit out of the box, and deserves some evaluating.
• You missed the bit where you loop until the reading is stable. – Jasen Jul 23 '16 at 11:21
• Yeah, and debouncing, and a ton of other things. It is just an (hopefully valuable) input, not a complete design. – Vladimir Cravero Jul 23 '16 at 11:23
• I presume you measure the input voltage. Looks somewhat similar to the marching squares algorithm. – James Pae Jul 23 '16 at 11:41
• This is really a nice circuit for DAC – but it's nothing you'd want to build if you don't have a set of good (read: low-tolerance) 5x R and 7x 2R resistors lying around (or 19x R with low tolerance, because 2R is really the same as two R resistors in series). – Marcus Müller Jul 23 '16 at 12:14
• You can use resistor arrays. – TEMLIB Jul 23 '16 at 13:13
The classical approach here is to use some kind of IO extender IC. In the simplest case, this would simply be a parallel-to-serial-shift register, for example some variant of the 74xx165:
The picture is from the Texas Instruments SN74LV165A datasheet.
The idea is the following: When the SH pin is pulled low, the values at pins A-H are stored in the internal flipflops, so this is where you connect your switches/buttons that either connect the pin directly to supply voltage (if pressed, for example), or ground the pin through a (largish) resistor:
[Schematic: button inputs with pull-down resistors feeding the shift register, created using CircuitLab]
That way, when the button is pressed, the voltage at the input is "high", otherwise it's low. It's usually recommendable to connect a capacitor in parallel to the resistor, because that will counter so-called "bouncing". The actual resistor and capacitor values aren't critical – if you use a resistor value that's too small, you'll draw a lot of current with every button pressed, and if you choose a capacitor value too large, buttons will appear to take a long time until they are effectively pressed and released, but in general, something like 10kOhm and 10nF – 100nF, values abundant in most part boxes, should do well for human interaction.
Yet another hint: if you need to go shopping for parts, buy so-called resistor networks; for example, there are parts that contain 8 identical resistors that have one common pin (which you'd connect to ground here) and 8 individual pins. That way, you save yourself a lot of soldering work, and your circuit can look very clean and tidy.
Now, having pulled SH "high" again, the input values are stored in the flipflops.
Now your microcontroller would start to send a clock signal (high, low, high, low, …) to the CLK input. At every rising edge (i.e. low-to-high transition), an input value appears at QH, and the internal states are pushed one step "to the right", meaning that the second flipflop then contains the old value of the first, the third flipflop the old value of the second, and so on. After seven clock cycles (the first value is visible right after latching), all 8 input values have been sequentially shown on QH. That's why this shift register is usually also called a parallel-to-serial converting shift register.
You can actually daisy-chain those: if you take a second shift register and attach its QH to your first shift register's SER input, then its output values will be "concatenated" to your first shift register's values. That way, with only three pins on your Arduino (one pin driving SH, one pin driving CLK, and one pin reading the serial output from QH), you can have virtually unlimited amounts of buttons – and that at a unit price of less than 20 ct per 8 inputs.
The only limiting part is that you regularly have to pull SH low, pull it back up, and generate 8*(N_shiftregisters) clock cycles; this has to be done often enough not to miss a key press – but usually, with microcontrollers running at MHz speeds, and with SPI hardware that is meant for exactly this kind of transfer, this isn't a problem at all – it's not uncommon to see someone query shift registers a couple thousand times per second.
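Here is a minimal bit-banged sketch of that read sequence in Arduino-flavored C (pin numbers are placeholders, and pinMode setup of SH/CLK as outputs and QH as input is assumed to happen elsewhere):

```c
#define PIN_SH   2   /* shift/load (latch) pin of the '165, active low */
#define PIN_CLK  3   /* clock pin */
#define PIN_QH   4   /* serial output of the '165 */

/* Latch the 8 parallel inputs, then shift them out bit by bit.
 * Returns one byte: bit 7 = input H (first bit out), bit 0 = input A. */
uint8_t read_shift_register(void) {
    uint8_t value = 0;

    digitalWrite(PIN_SH, LOW);       /* store A-H in the internal flipflops */
    delayMicroseconds(5);
    digitalWrite(PIN_SH, HIGH);      /* back to shift mode */

    for (int i = 7; i >= 0; i--) {
        if (digitalRead(PIN_QH))     /* current bit is visible at QH */
            value |= (1 << i);
        digitalWrite(PIN_CLK, HIGH); /* rising edge shifts the next bit out */
        digitalWrite(PIN_CLK, LOW);
    }
    return value;
}
```

With the hardware SPI peripheral, the same transfer reduces to a latch pulse followed by one SPI read per chained register.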
• Capacitors like that cause increased switch wear. If you must do analogue debounce, use positive feedback; else do software debounce. – Jasen Jul 24 '16 at 11:35
• @Jasen Haven't heard of that argument – you mean because of the high charge current flowing through the switch when pressed? Yes, in theory, there should be a resistor between the switch and the capacitor, but in practice, you can often do well enough by designing your traces thin enough to not let the current go through the roof. Or you assume that keypad switches are robust enough and they'd be attached with non-superconductive cabling, anyway (I'd be more worried about parasitic inductance leading to voltage spikes here, to be honest, than about the current flowing). – Marcus Müller Jul 24 '16 at 11:52
• @Jasen hope my edited circuit explains that well enough. – Marcus Müller Jul 24 '16 at 11:57
If you have 3 I/O pins, you could charlieplex them.
(You should add a pull down resistor before each I/O pin.)
Now as a note, you have to test all the possible combinations (6 in total) before determining which buttons are pressed and which ones are not. If you activate one pair of pins and find that there is continuity, there are three possibilities: current could flow through two pressed-down buttons (kind of in series), one pressed-down button, or all 3.
This is what I mean:
Also, ensure you DO NOT set any OUTPUT to LOW. If you want to prevent something from burning, put, say, a 1 kΩ resistor at each terminal. But that would unintentionally create a voltage divider.
• I think you need to have pull-downs on the inputs to not have floating inputs when none of the buttons are pressed. ATmega has internal pull-ups, though, so you could invert the logic and enable pull-ups on all but one pin at a time, and drive that one pin low. (Without using external pull-up/pull-down resistors) – ilkkachu Jul 23 '16 at 12:17
• @ilkkachu Oh wow thanks for telling me that I will edit the question. – Bradman175 Jul 23 '16 at 12:29
• I'm going to ask a silly question. Will the internal PULLUP accidentally power the circuit? – Bradman175 Jul 23 '16 at 12:38
• the pull-ups should pull the pins high, except for the one actively driven low or the ones connected to the low pin via the switches and diodes. There isn't really anything to be powered in that circuit, and the pull-up is weak anyway. You'd just need to interpret a high value the default and a low value the signal for a button press. (And also consider that the current through the diodes would be inverted too.) Other than that, I don't think it should matter if you have a pull-up or a pull-down. – ilkkachu Jul 23 '16 at 12:58
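To make the inverted-logic variant from the comments concrete, here is a hedged scan sketch in Arduino-flavored C (pin numbers are hypothetical; it drives exactly one pin low at a time, so no two outputs ever fight):

```c
const uint8_t pins[3] = {2, 3, 4};   /* placeholder Arduino pin numbers */

/* Scan all 6 ordered pin pairs. Bit k of the result is set when the
 * button (with its series diode) wired from pins[j] back to pins[i]
 * is pressed: the pull-up on the sense pin gets pulled low through it. */
uint8_t scan_buttons(void) {
    uint8_t state = 0, k = 0;
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            if (i == j) continue;
            for (int p = 0; p < 3; p++)
                pinMode(pins[p], INPUT);      /* everything high-impedance first */
            pinMode(pins[i], OUTPUT);
            digitalWrite(pins[i], LOW);       /* the one (and only) low output */
            pinMode(pins[j], INPUT_PULLUP);   /* sense pin with internal pull-up */
            delayMicroseconds(10);            /* let the lines settle */
            if (digitalRead(pins[j]) == LOW)  /* pulled low => button pressed */
                state |= (1 << k);
            k++;
        }
    }
    return state;
}
```

As noted above, simultaneous presses can still create sneak paths, so the six raw readings still need the combination checks described in the answer.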
You state parameters of:
• At least six switches
• At least two controllers
• Arduino Nano-compatible
• Using as few I/O pins as possible
Using a digital interface instead of an analog one will give you improved noise tolerance and eliminate accuracy requirements of DAC resistors.
If you search for "I2C keypad controller", you will get several possibilities.
Features of a keypad controller which you might want to take into consideration when selecting a part are inbuilt key debouncing (makes your software/hardware a bit simpler), ESD protection (the user could be a cat), and a user-definable address (so you can have more than one on the same I/O pins).
One of them is the MAX7360 Keypad Controller (this is not a product recommendation as I have no experience with it, and EE.SE is not a product recommendation site).
If you find that your selected device prefers to work with a 3.3 V I²C bus but your Nano works with 5 V, you can use a level-shifter.
Another point to consider is if you can physically solder the device - some of the ones I found use a BGA package. If you have a hot-air soldering station then that will be usable for you, otherwise go for one with pins at the edges.
• By far the lowest effort and most likely to be done perfect approach! – Marcus Müller Jul 24 '16 at 19:19
• @MarcusMüller I feel like it's like having a programming language with a ready-made "SpaceInvaders" instruction :-| (Previous comments of mine removed.) – Andrew Morton Jul 24 '16 at 19:27
Comment by capybaralet on Imitation learning considered unsafe? · 2019-01-09T18:27:26.934Z · score: 1 (1 votes) · LW · GW
I don't think I'd put it that way (although I'm not saying it's inaccurate). See my comments RE "safety via myopia" and "inner optimizers".
Comment by capybaralet on Imitation learning considered unsafe? · 2019-01-09T18:22:42.679Z · score: 6 (3 votes) · LW · GW
Yes, maybe? Elaborating...
I'm not sure how well this fits into the category of "inner optimizers"; I'm still organizing my thoughts on that (aiming to finish doing so within the week...). I'm also not sure that people are thinking about inner optimizers in the right way.
Also, note that the thing being imitated doesn't have to be a human.
OTTMH, I'd say:
• This seems more general in the sense that it isn't some "subprocess" of the whole system that becomes a dangerous planning process.
• This seems more specific in the sense that the boldest argument for inner optimizers is, I think, that they should appear in effectively any optimization problem when there's enough optimization pressure.
Comment by capybaralet on Imitation learning considered unsafe? · 2019-01-07T15:55:46.050Z · score: 4 (2 votes) · LW · GW
See the clarifying note in the OP. I don't think this is about imitating humans, per se.
The more general framing I'd use is WRT "safety via myopia" (something I've been working on in the past year). There is an intuition that supervised learning (e.g. via SGD as is common practice in current ML) is quite safe, because it doesn't have any built-in incentive to influence the world (resulting in instrumental goals); it just seeks to yield good performance on the training data, learning in a myopic sense to improve its performance on the present input. I think this intuition has some validity, but also might lead to a false sense of confidence that such systems are safe, when in fact they may end up behaving as if they *do* seek to influence the world, depending on the task they are trained on (ETA: and other details of the learning algorithm, e.g. outer-loop optimization and model choice).
Comment by capybaralet on Assuming we've solved X, could we do Y... · 2019-01-07T15:39:48.509Z · score: 1 (1 votes) · LW · GW
Aha, OK. So I either misunderstand or disagree with that.
I think SHF (at least most examples) have the human as "CEO" with AIs as "advisers", and thus the human can choose to ignore all of the advice and make the decision unaided.
Comment by capybaralet on Imitation learning considered unsafe? · 2019-01-07T15:31:57.847Z · score: 1 (1 votes) · LW · GW
I think I disagree pretty broadly with the assumptions/framing of your comment, although not necessarily the specific claims.
1) I don't think it's realistic to imagine we have "indistinguishable imitation" with an idealized discriminator. It might be possible in the future, and it might be worth considering to make intellectual progress, but I'm not expecting it to happen on a deadline. So I'm talking about what I expect might be a practical problem if we actually try to build systems that imitate humans in the coming decades.
2) I wouldn't say "decision theory"; I think that's a bit of a red herring. What I'm talking about is the policy.
3) I'm not sure about the link you are trying to make to the "universal prior is malign" ideas. But I'll draw my own connection. I do think the core of the argument I'm making results from an intuitive idea of what a simplicity prior looks like, and its propensity to favor something more like a planning process over something more like a lookup table.
## Imitation learning considered unsafe?
2019-01-06T15:48:36.078Z · score: 9 (4 votes)
Comment by capybaralet on Assuming we've solved X, could we do Y... · 2019-01-06T15:13:06.375Z · score: 1 (1 votes) · LW · GW
OK, so it sounds like your argument why SHF can't do ALD is (a specific, technical version of) the same argument that I mentioned in my last response. Can you confirm?
Comment by capybaralet on Conceptual Analysis for AI Alignment · 2018-12-30T21:58:25.522Z · score: 1 (1 votes) · LW · GW
I intended to make that clear in the "Concretely, I imagine a project around this with the following stages (each yielding at least one publication)" section. The TL;DR is: do a literature review of analytic philosophy research on (e.g.) honesty.
Comment by capybaralet on Assuming we've solved X, could we do Y... · 2018-12-30T21:56:30.356Z · score: 1 (1 votes) · LW · GW
Yes, please try to clarify. In particular, I don't understand your "|" notation (as in "S|Output").
I realized that I was a bit confused in what I said earlier. I think it's clear that (proposed) SHF schemes should be able to do at least as well as a human, given the same amount of time, because they have a human "on top" (as "CEO") who can merely ignore all of the AI helpers(/underlings).
But now I can also see an argument for why SHF couldn't do ALD, if it doesn't have arbitrarily long to deliberate: there would need to be some parallelism/decomposition in SHF, and that might not work well/perfectly for all problems.
## Conceptual Analysis for AI Alignment
2018-12-30T00:46:38.014Z · score: 17 (8 votes)
Comment by capybaralet on Assuming we've solved X, could we do Y... · 2018-12-27T04:42:09.568Z · score: 1 (1 votes) · LW · GW
Regarding the question of how to do empirical work on this topic: I remember there being one thing which seemed potentially interesting, but I couldn't find it in my notes (yet).
RE the rest of your comment: I guess you are taking issue with the complexity theory analogy; is that correct? An example hypothetical TDMP I used is "arbitrarily long deliberation" (ALD), i.e. a single human is allowed as long as they want to make the decision (I don't think that's a perfect "target" for alignment, but it seems like a reasonable starting point). I don't see why ALD would (even potentially) "do something that can't be approximated by SHF-schemes", since those schemes still have the human making a decision.
"Or was the discussion more about, assuming we have theoretical reasons to think that SHF-schemes can approximate TDMP, how to test it empirically?" <-- yes, IIUC.
Comment by capybaralet on Survey: What's the most negative*plausible cryonics-works story that you know? · 2018-12-19T22:42:54.714Z · score: 1 (1 votes) · LW · GW
I'd suggest separating these two scenarios, based on the way the comments are meant to work according to the OP.
Comment by capybaralet on Assuming we've solved X, could we do Y... · 2018-12-17T04:43:41.963Z · score: 1 (1 votes) · LW · GW
I actually don't understand why you say they can't be fully disentangled.
IIRC, it seemed to me during the discussion that your main objection was around whether (e.g.) "arbitrarily long deliberation (ALD)" was (or could be) fully specified in a way that accounts properly for things like deception, manipulation, etc. More concretely, I think you mentioned the possibility of an AI affecting the deliberation process in an undesirable way.
But I think it's reasonable to assume (within the bounds of a discussion) that there is a non-terrible way (in principle) to specify things like "manipulation". So do you disagree? Or is your objection something else entirely?
Comment by capybaralet on Assuming we've solved X, could we do Y... · 2018-12-12T19:20:36.102Z · score: 6 (3 votes) · LW · GW
Hey, David here!
Just writing to give some context... The point of this session was to discuss an issue I see with "super-human feedback (SHF)" schemes (e.g. debate, amplification, recursive reward modelling) that use helper AIs to inform human judgments. I guess there was more of an inferential gap going into the session than I expected, so for background: let's consider the complexity theory viewpoint in feedback (as discussed in section 2.2 of "AI safety via debate"). This implicitly assumes that we have access to a trusted (e.g. human) decision making process (TDMP), sweeping the issues that Stuart mentions under the rug.
Under this view, the goal of SHF is to efficiently emulate the TDMP, accelerating the decision-making. For example, we'd like an agent trained with SHF to be able to quickly (e.g. in a matter of seconds) make decisions that would take the TDMP billions of years to decide. But we don't aim to change the decisions.
Now, the issue I mentioned is: there doesn't seem to be any way to evaluate whether the SHF-trained agent is faithfully emulating the TDMP's decisions on such problems. It seems like, naively, the best we can do is train on problems where the TDMP can make decisions quickly, so that we can use its decisions as ground truth; then we just hope that it generalizes appropriately to the decisions that take TDMP billions of years. And the point of the session was to see if people have ideas for how to do less naive experiments that would allow us to increase our confidence that a SHF-scheme would yield safe generalization to these more difficult decisions.
Imagine there are 2 copies of me, A and B. A makes a decision with some helper AIs, and independently, B makes a decision without their help. A and B make different decisions. Who do we trust? I'm more ready to trust B, since I'm worried about the helper AIs having an undesirable influence on A's decision-making.
--------------------------------------------------------------------
...So questions of how to define human preferences or values seem mostly orthogonal to this question, which is why I want to assume them away. However, our discussion did make me consider more that I was making an implicit assumption (and this seems hard to avoid), that there was some idealized decision-making process that is assumed to be "what we want". I'm relatively comfortable with trusting idealized versions of "behavioral cloning/imitation/supervised learning" (P) or "(myopic) reinforcement learning/preference learning" (NP), compared with the SHF-schemes (PSPACE).
One insight I gleaned from our discussion is the usefulness of disentangling:
• an idealized process for *defining* "what we want" (HCH was mentioned as potentially a better model of this than "a single human given as long as they want to think about the decision" (which was what I proposed using, for the purposes of the discussion)).
• a means of *approximating* that definition.
From this perspective, the discussion topic was: how can we gain empirical evidence for/against this question: "Assuming that the output of a human's indefinite deliberation is a good definition of 'what they want', do SHF-schemes do a good/safe job of approximating that?"
Comment by capybaralet on Disambiguating "alignment" and related notions · 2018-11-26T06:55:58.050Z · score: 1 (1 votes) · LW · GW
So I discovered that Paul Christiano already made a very similar distinction to the holistic/parochial one here:
https://ai-alignment.com/ambitious-vs-narrow-value-learning-99bd0c59847e
ambitious ~ holistic
narrow ~ parochial
Someone also suggested simply using general/narrow instead of holistic/parochial.
Comment by capybaralet on Notification update and PM fixes · 2018-08-15T16:01:45.520Z · score: 1 (1 votes) · LW · GW
Has it been rolled out yet? I would really like this feature.
RE spamming: certainly they can be disabled by default, and you can have an unsubscribe button at the bottom of every email?
Comment by capybaralet on Safely and usefully spectating on AIs optimizing over toy worlds · 2018-08-15T15:49:12.296Z · score: 1 (1 votes) · LW · GW
I view this as a capability control technique, highly analogous to running a supervised learning algorithm where a reinforcement learning algorithm is expected to perform better. Intuitively, it seems like there should be a spectrum of options between (e.g.) supervised learning and reinforcement learning that would allow one to make more fine-grained safety-performance trade-offs.
I'm very optimistic about this approach of doing "capability control" by making less agent-y AI systems. If done properly, I think it could allow us to build systems that have no instrumental incentives to create subagents (although we'd still need to worry about "accidental" creation of subagents and (e.g. evolutionary) optimization pressures for their creation).
I would like to see this fleshed out as much as possible. This idea is somewhat intuitive, but it's hard to tell if it is coherent, or how to formalize it.
P.S. Is this the same as "platonic goals"? Could you include references to previous thought on the topic?
Comment by capybaralet on Disambiguating "alignment" and related notions · 2018-06-10T14:31:55.309Z · score: 2 (1 votes) · LW · GW
I realized it's unclear to me what "trying" means here, and in your definition of intentional alignment. I get the sense that you mean something much weaker than MIRI does by "(actually) trying", and/or that you think this is a lot easier to accomplish than they do. Can you help clarify?
Comment by capybaralet on Disambiguating "alignment" and related notions · 2018-06-10T14:26:06.488Z · score: 2 (1 votes) · LW · GW
It seems like you are referring to daemons.
To the extent that daemons result from an AI actually doing a good job of optimizing the right reward function, I think we should just accept that as the best possible outcome.
To the extent that daemons result from an AI doing a bad job of optimizing the right reward function, that can be viewed as a problem with capabilities, not alignment. That doesn't mean we should ignore such problems; it's just out of scope.
Indeed, most people at MIRI seem to think that most of the difficulty of alignment is getting from "has X as explicit terminal goal" to "is actually trying to achieve X."
That seems like the wrong way of phrasing it to me. I would put it like "MIRI wants to figure out how to build properly 'consequentialist' agents, a capability they view us as currently lacking".
Comment by capybaralet on Disambiguating "alignment" and related notions · 2018-06-10T14:14:01.206Z · score: 2 (1 votes) · LW · GW
Can you please explain the distinction more succinctly, and say how it is related?
Comment by capybaralet on Disambiguating "alignment" and related notions · 2018-06-07T19:35:35.554Z · score: 4 (2 votes) · LW · GW
I don't think I was very clear; let me try to explain.
I mean different things by "intentions" and "terminal values" (and I think you do too?)
By "terminal values" I'm thinking of something like a reward function. If we literally just program an AI to have a particular reward function, then we know that it's terminal values are whatever that reward function expresses.
Whereas "trying to do what H wants it to do" I think encompasses a broader range of things, such as when R has uncertainty about the reward function, but "wants to learn the right one", or really just any case where R could reasonably be described as "trying to do what H wants it to do".
Talking about a "black box system" was probably a red herring.
Comment by capybaralet on Disambiguating "alignment" and related notions · 2018-06-07T18:47:43.057Z · score: 2 (1 votes) · LW · GW
Another way of putting it: A parochially aligned AI (for task T) needs to understand task T, but doesn't need to have common sense "background values" like "don't kill anyone".
Narrow AIs might require parochial alignment techniques in order to learn to perform tasks that we don't know how to write a good reward function for. And we might try to combine parochial alignment with capability control in order to get something like a genie without having to teach it background values. When/whether that would be a good idea is unclear ATM.
Comment by capybaralet on Disambiguating "alignment" and related notions · 2018-06-07T18:43:13.704Z · score: 2 (1 votes) · LW · GW
It doesn't *necessarily*. But it sounds like what you're thinking of here is some form of "sufficient alignment".
The point is that you could give an AI a reward function that leads it to be a good personal assistant program, so long as it remains restricted to doing the sort of things we expect a personal assistant program to do, and isn't doing things like manipulating the stock market when you ask it to invest some money for you (unless that's what you expect from a personal assistant). If it knows it could do things like that, but doesn't want to, then it's more like something sufficiently aligned. If it doesn't do such things because it doesn't realize they are possibilities (yet), or because it hasn't figured out a good way to use its actuators to have that kind of effect (yet), because you've done a good job boxing it, then it's more like "parochially aligned".
Comment by capybaralet on Amplification Discussion Notes · 2018-06-06T12:06:55.580Z · score: 3 (2 votes) · LW · GW
This is one of my main cruxes. I have 2 main concerns about honest mistakes:
1) Compounding errors: IIUC, Paul thinks we can find a basin of attraction for alignment (or at least corrigibility...) so that an AI can help us correct it online to avoid compounding errors. This seems plausible, but I don't see any strong reasons to believe it will happen or that we'll be able to recognize whether it is or not.
2) The "progeny alignment problem" (PAP): An honest mistake could result in the creation an unaligned progeny. I think we should expect that to happen quickly if we don't have a good reason to believe it won't. You could argue that humans recognize this problem, so an AGI should as well (and if it's aligned, it should handle the situation appropriately), but that begs the question of how we got an aligned AGI in the first place. There are basically 3 subconcerns here (call the AI we're building "R"):
2a) R can make an unaligned progeny before it's "smart enough" to realize it needs to exercise care to avoid doing so.
2b) R gets smart enough to realize that solving PAP (e.g. doing something like MIRI's AF) is necessary in order to develop further capabilities safely, and that ends up being a huge roadblock that makes R uncompetitive with less safe approaches.
2c) If R has gamma < 1, it could knowingly, rationally decide to build a progeny that is useful through R's effective horizon, but will take over and optimize a different objective after that.
2b and 2c are *arguably* "non-problems" (although they're at least worth taking into consideration). 2a seems like a more serious problem that needs to be addressed.
Comment by capybaralet on Disambiguating "alignment" and related notions · 2018-06-05T19:59:10.666Z · score: 9 (2 votes) · LW · GW
This is not what I meant by "the same values", but the comment points towards an interesting point.
When I say "the same values", I mean the same utility function, as a function over the state of the world (and the states of "R is having sex" and "H is having sex" are different).
The interesting point is that states need to be inferred from observations, and it seems like there are some fundamentally hard issues around doing that in a satisfying way.
Comment by capybaralet on Funding for AI alignment research · 2018-06-05T16:07:24.487Z · score: 3 (1 votes) · LW · GW
So my original response was to the statement:
Differential research that advances safety more than AI capability still advances AI capability.
Which seems to suggest that advancing AI capability is sufficient reason to avoid technical safety that has non-trivial overlap with capabilities. I think that's wrong.
RE the necessary and sufficient argument:
1) Necessary: it's unclear that a technical solution to alignment would be sufficient, since our current social institutions are not designed for superintelligent actors, and we might not develop effective new ones quickly enough
2) Sufficient: I agree that never building AGI is a potential Xrisk (or close enough). I don't think it's entirely unrealistic "to shoot for levels of coordination like 'let's just never build AGI'", although I agree it's a long shot. Supposing we have that level of coordination, we could use "never build AGI" as a backup plan while we work to solve technical safety to our satisfaction, if that is in fact possible.
Comment by capybaralet on Funding for AI alignment research · 2018-06-05T16:01:44.240Z · score: 3 (1 votes) · LW · GW
Moving on from that I'm thinking that we might need a broad base of support from people (depending upon the scenario) so being able to explain how people could still have meaningful lives post AI is important for building that support. So I've been thinking about that.
This sounds like it would be useful for getting people to support the development of AGI, rather than effective global regulation of AGI. What am I missing?
## Disambiguating "alignment" and related notions
2018-06-05T15:35:15.091Z · score: 43 (13 votes)
Comment by capybaralet on Funding for AI alignment research · 2018-06-05T14:38:21.600Z · score: 3 (1 votes) · LW · GW
Can you give some arguments for these views?
I think the best argument against institution-oriented work is that it might be harder to make a big impact. But more importantly, I think strong global coordination is necessary and sufficient, whereas technical safety is plausibly neither.
I also agree that one should consider tradeoffs, sometimes. But every time someone has raised this concern to me (I think it's been 3x?) I think it's been a clear cut case of "why are you even worrying about that", which leads me to believe that there are a lot of people who are overconcerned about this.
Comment by capybaralet on When is unaligned AI morally valuable? · 2018-06-05T14:33:53.494Z · score: 3 (1 votes) · LW · GW
It seems like the preferences of the AI you build are way more important than its experience (not sure if that's what you mean).
This is because the AIs preferences are going to have a much larger downstream impact?
I'd agree, but caveat that there may be likely possible futures which don't involve the creation of hyper-rational AIs with well-defined preferences, but rather artificial life with messy incomplete, inconsistent preferences but morally valuable experiences. More generally, the future of the light cone could be determined by societal/evolutionary factors rather than any particular agent or agent-y process.
I found your 2nd paragraph unclear...
the goals happen to overlap enough
Is this referring to the goals of having "AIs that have good preferences" and "AIs that have lots of morally valuable experience"?
Comment by capybaralet on Funding for AI alignment research · 2018-06-04T13:06:14.377Z · score: 3 (1 votes) · LW · GW
Are you funding constrained? Would you give out more money if you had more?
Comment by capybaralet on Funding for AI alignment research · 2018-06-04T13:05:37.898Z · score: 3 (1 votes) · LW · GW
FWIW, I think I represent the majority of safety researchers in saying that you shouldn't be too concerned with your effect on capabilities; there's many more people pushing capabilities, so most safety research is likely a drop in the capabilities bucket (although there may be important exceptions!)
Personally, I agree that improving social institutions seems more important for reducing AI-Xrisk ATM than technical work. Are you doing that? There are options for that kind of work as well, e.g. at FHI.
Comment by capybaralet on When is unaligned AI morally valuable? · 2018-05-29T13:44:29.028Z · score: 3 (2 votes) · LW · GW
Overall, I think the question “which AIs are good successors?” is both neglected and time-sensitive, and is my best guess for the highest impact question in moral philosophy right now.
Interesting... my model of Paul didn't assign any work in moral philosophy high priority.
I agree this is high impact. My idea of the kind of work to do here is mostly trying to solve the hard(ish) problem of consciousness, so that we can make a more informed guess as to the quantity and valence of experience that different possible futures generate.
Comment by capybaralet on Soon: a weekly AI Safety prerequisites module on LessWrong · 2018-05-10T23:30:58.394Z · score: 1 (1 votes) · LW · GW
I don't think most places have enough ML courses at the undergraduate level; I'd expect 0-2 undergraduate ML courses at a typical large or technically focused university. OFC, you can often take graduate courses as an undergraduate as well.
Comment by capybaralet on Soon: a weekly AI Safety prerequisites module on LessWrong · 2018-05-07T18:44:44.823Z · score: 1 (1 votes) · LW · GW
There are lots of graduate ML programs that will give you ML background (although that might not be the most efficient route; e.g. compare with Google Brain Residency).
Is there a clear academic path towards getting a good background for AF? Maybe mathematical logic? RAISE might be filling that niche?
Comment by capybaralet on Understanding Iterated Distillation and Amplification: Claims and Oversight · 2018-04-23T21:03:53.443Z · score: 1 (1 votes) · LW · GW
"But I'm not sure what the alternative would be."
I'm not sure if it's what you're thinking of, but I'm thinking of “What action is best according to these values?” == "maximize reward". One alternative that's worth investigating more (IMO) is imposing hard constraints.
For instance, you could have an RL agent taking actions in $(a_1, a_2) \in \mathbb{R}^2$, and impose the constraint that $a_1 + a_2 < 3$ by projection.
A recent near-term safety paper takes this approach: https://arxiv.org/abs/1801.08757
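To make the projection idea concrete, here is a minimal sketch (mine, not from the comment or the linked paper) of Euclidean projection onto the half-space $a_1 + a_2 \leq 3$, standing in for the strict constraint above; the policy and training loop are omitted and the names are illustrative:

import numpy as np

def project_action(a, limit=3.0):
    # Euclidean projection onto {a : a1 + a2 <= limit}: if the proposed
    # action violates the constraint, move it perpendicular to the
    # boundary a1 + a2 = limit (whose normal direction is (1, 1)).
    excess = a[0] + a[1] - limit
    if excess <= 0:
        return a  # already feasible
    return a - excess / 2.0 * np.ones(2)

raw_action = np.array([2.5, 1.5])         # hypothetical policy output
safe_action = project_action(raw_action)  # -> [2.0, 1.0], on the boundary
print(safe_action, safe_action.sum())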
Comment by capybaralet on China’s Plan to ‘Lead’ in AI: Purpose, Prospects, and Problems · 2017-08-11T04:40:23.729Z · score: 0 (0 votes) · LW · GW
a FB friend of mine speculated that this was referring to alienation resulting from ppl losing their jobs to robots... shrug
Comment by capybaralet on China’s Plan to ‘Lead’ in AI: Purpose, Prospects, and Problems · 2017-08-10T22:26:05.157Z · score: 0 (0 votes) · LW · GW
What is "robot alienation"?
Comment by capybaralet on Counterfactual Mugging · 2017-01-30T18:14:17.104Z · score: 1 (1 votes) · LW · GW
But you aren't supposed to be updating... the essence of UDT, I believe, is that your policy should be set NOW, and NEVER UPDATED.
So... either:
1. You consider the choice of policy based on the prior where you DIDN'T KNOW whether you'd face Nomega or Omega, and NEVER UPDATE IT (this seems obviously wrong to me: why are you using your old prior instead of your current posterior?). or
2. You consider the choice of policy based on the prior where you KNOW that you are facing Omega AND that the coin is tails, in which case paying Omega only loses you money.
Comment by capybaralet on Counterfactual Mugging · 2017-01-30T18:08:34.353Z · score: 0 (0 votes) · LW · GW
Thanks for pointing that out. The answer is, as expected, a function of p. So I now find explanations of why UDT gets mugged incomplete and misleading.
Here's my analysis:
The action set is {give, don't give}, which I'll identify with {1, 0}. Now, the possible deterministic policies are simply every mapping from {N,O} --> {1,0}, of which there are 4.
We can disregard the policies for which pi(N) = 1, since giving money to Nomega serves no purpose. So we're left with
pi_give and pi_don't, which give/don't, respectively, to Omega.
Now, we can easily compute expected value, as follows:
r(pi_give(N)) = 0
r(pi_give(O, heads)) = 10, r(pi_give(O, tails)) = -1
r(pi_don't(N)) = 10
r(pi_don't(O)) = 0
So now:
Eg := E_give(r) = 0 p + .5 (10-1) * (1-p)
Ed := E_don't(r) = 10 p + 0 (1-p)
Eg > Ed whenever 4.5 (1-p) > 10 p,
i.e. whenever 4.5 > 14.5 p
i.e. whenever 9/29 > p
So, whether you should precommit to being mugged depends on how likely you are to encounter N vs. O, which is intuitively obvious.
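As a quick numeric check (my addition, not part of the original comment), the two expected values and the 9/29 crossover can be computed directly:

def expected_reward(p):
    # p = probability that the predictor you meet is Nomega (N);
    # with probability 1 - p it is Omega (O) with a fair coin.
    E_give = 0 * p + 0.5 * (10 - 1) * (1 - p)  # win 10 on heads, pay 1 on tails
    E_dont = 10 * p + 0 * (1 - p)              # Nomega pays 10 to non-payers
    return E_give, E_dont

for p in (0.2, 9 / 29, 0.4):
    eg, ed = expected_reward(p)
    print(f"p={p:.3f}  E_give={eg:.3f}  E_dont={ed:.3f}")
# E_give > E_dont exactly when p < 9/29 (about 0.310), matching the threshold above.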
Comment by capybaralet on Progress and Prizes in AI Alignment · 2017-01-05T04:56:37.765Z · score: 3 (3 votes) · LW · GW
Looking at what they've produced to date, I don't really expect MIRI and CHCAI to produce that similar of work. I expect Russell's group to be more focused on value learning and corrigibility vs. reliable agent designs (MIRI).
Comment by capybaralet on Value of Information: Four Examples · 2016-09-26T22:48:41.335Z · score: 1 (1 votes) · LW · GW
Does anyone have any insight into how VoI plays with Bayesian reasoning?
At a glance, it looks like the VoI is usually not considered from a Bayesian viewpoint, as it is here. For instance, wikipedia says:
""" A special case is when the decision-maker is risk neutral where VoC can be simply computed as; VoC = "value of decision situation with perfect information" - "value of current decision situation" """
From the perspective of avoiding wireheading, an agent should be incentivized to gain information even when this information decreases its (subjective) "value of decision situation". For example, consider a bernoulli 2-armed bandit:
If the agent's prior over the arms is uniform over [0,1], its current (subjective) value is .5 (playing arm 1). After many observations, it learns that (with high confidence) arm 1 has reward .1 and arm 2 has reward .2. It should be glad to know this (so it can change to the optimal policy of playing arm 2), BUT the subjective value of this decision situation is less than when it was ignorant, because .2 < .5.
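A small simulation (my sketch, with illustrative numbers matching the example above) shows the effect: the agent's subjective value drops from .5 to about .2 even though its policy strictly improves:

import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.1, 0.2])       # unknown to the agent

# Uniform prior = Beta(1, 1) on each arm, so each posterior mean starts at 0.5
# and the subjective value of the decision situation is 0.5.
alpha, beta = np.ones(2), np.ones(2)
print("prior value:", np.max(alpha / (alpha + beta)))       # 0.5

# Observe each arm many times; the posteriors concentrate near the truth.
for arm in range(2):
    pulls = rng.random(10000) < true_means[arm]
    alpha[arm] += pulls.sum()
    beta[arm] += (~pulls).sum()

print("posterior value:", np.max(alpha / (alpha + beta)))   # ~0.2 < 0.5
# The agent now correctly plays arm 2, yet values its situation less.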
## Problems with learning values from observation
2016-09-21T00:40:49.102Z · score: 0 (7 votes)
Comment by capybaralet on Risks from Approximate Value Learning · 2016-08-30T00:22:45.438Z · score: 0 (0 votes) · LW · GW
It seems like most people think that reduced impact is as hard as value learning.
I think that's not quite true; it depends on details of the AI's design.
I don't agree that "It's likely that all substantially easier AIs are too far from FAI to still be a net good.", but I suspect the disagreement comes from different notions of "AI" (as many disagreements do, I suspect).
Taking a broad definition of AI, I think there are many techniques (like supervised learning) that are probably pretty safe and can do a lot of narrow AI tasks (and can maybe even be composed into systems capable of general intelligence). For instance, I think the kind of systems that are being built today are a net good (but might not be if given more data and compute, especially those based on Reinforcement Learning).
Comment by capybaralet on Risks from Approximate Value Learning · 2016-08-30T00:16:03.587Z · score: 0 (0 votes) · LW · GW
I edited to clarify what I mean by "approximate value learning".
## Risks from Approximate Value Learning
2016-08-27T19:34:06.178Z · score: 1 (4 votes)
Comment by capybaralet on Should we enable public binding precommitments? · 2016-08-23T17:54:56.492Z · score: 0 (0 votes) · LW · GW
People will be incentivized to share private things if robust public precommitments become available, because we all stand to benefit from more information. Because of human nature, we might settle on some agreement where some information is private, or differentially private, and/or where private information is only accessed via secure computation to determine things relevant to the public interest.
Comment by capybaralet on Should we enable public binding precommitments? · 2016-08-23T17:52:42.672Z · score: 0 (0 votes) · LW · GW
Contracts are limited in what they can include, and require a government to enforce them.
Comment by capybaralet on Should we enable public binding precommitments? · 2016-08-23T17:51:45.553Z · score: 0 (0 votes) · LW · GW
Precommitments are more general, since they don't require more than one party, but they are very similar.
Currently, contracts are usually enforced by the government, and there are limits to what can be included in a contract, and the legality of the contract can be disputed.
Binding precommitments would be useful for enabling cooperation in inefficient games: http://lesswrong.com/lw/nv3/inefficient_games/
## Inefficient Games
2016-08-23T17:47:02.882Z · score: 14 (15 votes)
## Should we enable public binding precommitments?
2016-07-31T19:47:05.588Z · score: 0 (1 votes)
Comment by capybaralet on Conservation of expected moral evidence, clarified · 2016-07-30T13:52:00.111Z · score: 0 (0 votes) · LW · GW
"So conservation of expected moral evidence is something that would be automatically true if morality were something real and objective, and is also a desiderata when constructing general moral systems in practice."
This seems to go against your pulsar example... I guess you mean something like: "if [values were] real, objective, and immutable"?
Comment by capybaralet on Notes on the Safety in Artificial Intelligence conference · 2016-07-11T23:40:18.806Z · score: 3 (3 votes) · LW · GW
A few questions, and requests for elaboration:
• In what ways, and for what reasons, did people think that cybersecurity had failed?
• What techniques from cybersecurity were thought to be relevant?
• Any idea what Mallah meant by “non-self-centered ontologies”? I am imagining things like CIRL (https://arxiv.org/abs/1606.03137)
Can you briefly define (any of) the following terms (or give you best guess what was meant by them)?:
• meta-machine-learning
• reflective analysis
• knowledge-level redundancy
Comment by capybaralet on Notes on the Safety in Artificial Intelligence conference · 2016-07-11T21:53:47.621Z · score: 1 (1 votes) · LW · GW
FYI, Dario is from Google Brain (which is distinct from Google DeepMind).
Comment by capybaralet on Yoshua Bengio on AI progress, hype and risks · 2016-04-07T17:52:00.764Z · score: 4 (4 votes) · LW · GW
Comparing with articles from a year ago, e.g. http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better, this represents significant progress.
I'm a PhD student in Yoshua's lab. I've spoken with him about this issue several times, and he has moved on this issue, as have Yann and Andrew. From my perspective following this issue, there was tremendous progress in the ML community's attitude towards Xrisk.
I'm quite optimistic that such progress will continue, although pessimistic that it will be fast enough and that the ML community's attitude will be anything like sufficient for a positive outcome.
Comment by capybaralet on Singleton: the risks and benefits of one world governments · 2015-10-21T05:04:00.746Z · score: 1 (1 votes) · LW · GW
I think an important first step should be to try to get a sense of the distribution over possible singletons.
Only then can we have a good idea of where a line of "acceptableness" should be drawn.
Comment by capybaralet on Singleton: the risks and benefits of one world governments · 2015-10-21T05:02:04.415Z · score: 0 (0 votes) · LW · GW
The idea here is to binarize the problem via a definition of acceptability. From this perspective (which, it is argued, would facilitate analysis), the question is not relevant.
I'm not sure if thinking in terms of acceptable/unacceptable is actually very useful, though...
## A Basic Problem of Ethics: Panpsychism?
2015-01-27T06:27:20.028Z · score: -4 (11 votes)
## A Somewhat Vague Proposal for Grounding Ethics in Physics
2015-01-27T05:45:52.991Z · score: -3 (16 votes) |
# Quadratic fields and solving Diophantine equations
I would like to learn to solve Diophantine equations, and I think my next step would be quadratic fields or number fields. What kinds of methods are there for using these to solve equations? And what kinds of Diophantine equations can be solved using quadratic fields? Are there just a few methods one can list here, or should I read some book?
You should read a book on algebraic number theory, e.g. Stewart and Tall or Neukirch. – Alex B. Dec 3 '11 at 12:16
Two books that explicitly deal with the connection: Cox's Primes of the Form $x^2 + ny^2$ and Fröhlich–Taylor's Algebraic Number Theory. – Dylan Moreland Dec 3 '11 at 22:24
Typesetting annuity and life-insurance symbols in ConTeXt
How do I typeset annuity and life-insurance symbols, actuarial notation in ConTeXt. I see there are packages available but not for ConTeXt.
Thanks
• It would be nice to have an example of how they look like. I don't know those symbols. – Marco Oct 22 '13 at 10:51
• From what I recall having seen some of the actuarial study material, there's nothing that can't be done using standard maths notation. I think you'll need left sub- and superscripts, and some accents not normally used (\urcorner as an accent on a subscript) – Chris H Oct 22 '13 at 10:58
• for examples, see actuaries.org.uk/research-and-resources/documents/… try starting at p33. – Chris H Oct 22 '13 at 11:00
• In the source code of the Wikipedia page on actuarial notation, the symbol is typeset as follows: a_{\overline{n|}i}. – jub0bs Oct 22 '13 at 12:03
• although unicode recognizes the "actuarial bend" as a character, I doubt it can be easily produced as a single symbol in a font. it certainly isn't acceptably represented by the upper right "quine corner" (\urcorner). – barbara beeton Oct 22 '13 at 12:15
This is Plain TeX, but I guess it can do for ConTeXt. Experts can improve it.
\def\actuarial#1{%
\vbox{
\offinterlineskip
\tabskip=0pt
\mathsurround=0pt
\halign{##&\vrule##\cr
\noalign{\hrule}%
&height 1pt\cr
$\scriptstyle#1$&\cr
}%
}%
}
$a_{\actuarial{n}}$
\bye
• I normally avoid using an explicit \scriptstyle. It may be better to use \mathpalette to get automatic scaling of the argument. – Aditya Oct 22 '13 at 23:46
• @Aditya I don't know whether ConTeXt has \mathpalette. ;-). However I don't think this symbol is ever used in subscripts. – egreg Oct 23 '13 at 7:18
• ConTeXt has almost all commands defined in plain TeX, so yes it has mathpalette, although the current practice is to use setmathstyle instead (basically assume that over etc will not be used so that the math style is predictable) – Aditya Oct 23 '13 at 14:03
Based on Barbara Beeton's comment, you just need to pick a font that includes the actuarial bend symbol. For example, using XITS fonts you get:
% Use a math font that has the actuarial bend symbol
\usemodule[simplefonts]
\setmathfont[XITS]
\Umathchardef\actuarial "0 "0 "20E7
\starttext
$a_{n \actuarial}$
\stoptext
If someone can tell me what the right math class and TeX name for this glyph are, I can send in a request to add this to char-def.lua so that it works out of the box in ConTeXt.
• The spacing seems wrong. The horizontal bar should not touch the preceding letters. – Marco Oct 22 '13 at 22:28
• @Marco: Tell that to the glyph designer :) (But it could also be something due to incorrect scaling in subscripts) – Aditya Oct 22 '13 at 23:43
• The scaling of the subscripts is correct. This looks OK if a wider glyph is used. – Aditya Oct 22 '13 at 23:51
• so the basic glyph really should be narrower, and horizontally extendable. i'll forward that comment for consideration. – barbara beeton Oct 23 '13 at 7:22
Here's something to get you going, you may want to tweak the raisebox dimension, and the negative spaces, and maybe the size of the \urcorner.
\documentclass{article}
\usepackage{amsmath,amssymb}
\newcommand{\bend}[1]{\smash{#1\!\!\!{\raisebox{-0.2em}{\big\urcorner}}}}
\begin{document}
This is horrible:
$a_{\overline{n|}i}$
This isn't great, but is much better:
$a_{\bend{n} i}$
\end{document}
No doubt someone can propose a cleaner way of doing the adjustments, I can update for comments.
Also it will need testing in ConTeXt - I don't use it, though from what I've read it should work. |
# Alamouti STBC with 2 receive antenna
March 15, 2009
In the past, we had discussed the two transmit, one receive antenna Alamouti Space Time Block Coding (STBC) scheme. In this post, let us discuss the impact of having two antennas at the receiver. For the discussion, we will assume that the channel is a flat fading Rayleigh multipath channel and the modulation is BPSK.
## Alamouti STBC with two receive antenna
The principle of space time block coding with 2 transmit antennas and one receive antenna is explained in the post on Alamouti STBC. With two receive antennas, the system can be modeled as shown in the figure below.
Figure: 2 Transmit 2 Receive Alamouti STBC
For discussion on the channel and noise model, please refer to the post on two transmit, one receive antenna Alamouti Space Time Block Coding (STBC) scheme.
The received signal in the first time slot is,
$\begin{bmatrix}y_1^1 \\ y_2^1\end{bmatrix} = \begin{bmatrix}h_{11} & h_{12} \\ h_{21} & h_{22}\end{bmatrix}\begin{bmatrix}x_1 \\ x_2\end{bmatrix} + \begin{bmatrix}n_1^1 \\ n_2^1\end{bmatrix}$.
Assuming that the channel remains constant for the second time slot, the received signal in the second time slot is,
$\begin{bmatrix}y_1^2 \\ y_2^2\end{bmatrix} = \begin{bmatrix}h_{11} & h_{12} \\ h_{21} & h_{22}\end{bmatrix}\begin{bmatrix}-x_2^* \\ x_1^*\end{bmatrix} + \begin{bmatrix}n_1^2 \\ n_2^2\end{bmatrix}$
where
$\begin{bmatrix}y_1^1 \\ y_2^1\end{bmatrix}$ are the received information at time slot 1 on receive antennas 1, 2 respectively,
$\begin{bmatrix}y_1^2 \\ y_2^2\end{bmatrix}$ are the received information at time slot 2 on receive antennas 1, 2 respectively,
$h_{ij}$ is the channel from the $j^{th}$ transmit antenna to the $i^{th}$ receive antenna,
$x_1$, $x_2$ are the transmitted symbols,
$\begin{bmatrix}n_1^1 \\ n_2^1\end{bmatrix}$ are the noise terms at time slot 1 on receive antennas 1, 2 respectively and
$\begin{bmatrix}n_1^2 \\ n_2^2\end{bmatrix}$ are the noise terms at time slot 2 on receive antennas 1, 2 respectively.
Combining the equations at time slot 1 and 2,
$\begin{bmatrix}y_1^1 \\ y_2^1 \\ {y_1^2}^* \\ {y_2^2}^*\end{bmatrix} = \begin{bmatrix}h_{11} & h_{12} \\ h_{21} & h_{22} \\ h_{12}^* & -h_{11}^* \\ h_{22}^* & -h_{21}^*\end{bmatrix}\begin{bmatrix}x_1 \\ x_2\end{bmatrix} + \begin{bmatrix}n_1^1 \\ n_2^1 \\ {n_1^2}^* \\ {n_2^2}^*\end{bmatrix}$.
Let us define $\mathbf{H} = \begin{bmatrix}h_{11} & h_{12} \\ h_{21} & h_{22} \\ h_{12}^* & -h_{11}^* \\ h_{22}^* & -h_{21}^*\end{bmatrix}$.
To solve for $\begin{bmatrix}x_1 \\ x_2\end{bmatrix}$, we know that we need to find the inverse of $\mathbf{H}$.
We know, for a general m x n matrix, the pseudo inverse is defined as,
$\mathbf{H^+}=(H^HH)^{-1}H^H$.
The term,
$(H^H H) = \begin{bmatrix}|h_{11}|^2+|h_{21}|^2+|h_{12}|^2+|h_{22}|^2 & 0 \\ 0 & |h_{11}|^2+|h_{21}|^2+|h_{12}|^2+|h_{22}|^2\end{bmatrix}$
Since this is a diagonal matrix, the inverse is just the inverse of the diagonal elements, i.e
$(H^H H)^{-1} = \begin{bmatrix}\frac{1}{|h_{11}|^2+|h_{21}|^2+|h_{12}|^2+|h_{22}|^2} & 0 \\ 0 & \frac{1}{|h_{11}|^2+|h_{21}|^2+|h_{12}|^2+|h_{22}|^2}\end{bmatrix}$
The estimate of the transmitted symbol is,
$\begin{bmatrix}\hat{x}_1 \\ \hat{x}_2\end{bmatrix} = (H^H H)^{-1} H^H \begin{bmatrix}y_1^1 \\ y_2^1 \\ {y_1^2}^* \\ {y_2^2}^*\end{bmatrix}$.
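As a quick numeric sanity check (my addition, not part of the original post), one can verify that the effective channel matrix defined above is orthogonal, so $H^H H$ is a scaled identity for any channel realization:

import numpy as np

rng = np.random.default_rng(1)
h = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)

# Effective 4x2 channel from the stacked received vector above;
# h[i, j] is the channel from transmit antenna j to receive antenna i.
H = np.array([[h[0, 0],           h[0, 1]],
              [h[1, 0],           h[1, 1]],
              [np.conj(h[0, 1]), -np.conj(h[0, 0])],
              [np.conj(h[1, 1]), -np.conj(h[1, 0])]])

print(np.round(H.conj().T @ H, 10))
# -> (|h11|^2 + |h21|^2 + |h12|^2 + |h22|^2) * I, a diagonal matrix, so the
# pseudo-inverse causes no cross-talk between x1 and x2.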
## Simulation Model
The Matlab/Octave script performs the following:
(a) Generate a random binary sequence of +1's and -1's.
(b) Group them into pairs of two symbols.
(c) Code them per the Alamouti Space Time code, multiply the symbols with the channel and then add white Gaussian noise.
(d) Equalize the received symbols at the receiver.
(e) Perform hard decision decoding and count the bit errors.
(f) Repeat for multiple values of $\frac{E_b}{N_0}$ and plot the simulation and theoretical results.
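The Matlab/Octave script itself is not shown here; as a rough sketch (not the author's script), the steps can be reproduced in NumPy under the same assumptions (BPSK, flat Rayleigh fading held constant over the two symbol periods, unit total transmit power; variable names such as nsym are illustrative):

import numpy as np

rng = np.random.default_rng(0)
nsym = 10**5                          # number of Alamouti symbol pairs
ber = {}

for ebn0_dB in range(0, 26, 5):
    # (a)-(b) random BPSK symbols, grouped into pairs (x1, x2)
    x = 2 * rng.integers(0, 2, size=(2, nsym)) - 1
    # flat Rayleigh channel h[i, j]: transmit j -> receive i, constant over 2 slots
    h = (rng.normal(size=(2, 2, nsym)) + 1j * rng.normal(size=(2, 2, nsym))) / np.sqrt(2)
    n = (rng.normal(size=(4, nsym)) + 1j * rng.normal(size=(4, nsym))) / np.sqrt(2)
    n *= 10 ** (-ebn0_dB / 20)

    # (c) slot 1 sends (x1, x2)/sqrt(2); slot 2 sends (-x2*, x1*)/sqrt(2)
    y11 = (h[0, 0] * x[0] + h[0, 1] * x[1]) / np.sqrt(2) + n[0]
    y21 = (h[1, 0] * x[0] + h[1, 1] * x[1]) / np.sqrt(2) + n[1]
    y12 = (-h[0, 0] * np.conj(x[1]) + h[0, 1] * np.conj(x[0])) / np.sqrt(2) + n[2]
    y22 = (-h[1, 0] * np.conj(x[1]) + h[1, 1] * np.conj(x[0])) / np.sqrt(2) + n[3]

    # (d) equalize: since H^H H is diagonal, the pseudo-inverse reduces to the
    # combining below (a positive scaling, which cannot change the hard decision)
    g = (np.abs(h) ** 2).sum(axis=(0, 1))
    x1_hat = (np.conj(h[0, 0]) * y11 + np.conj(h[1, 0]) * y21
              + h[0, 1] * np.conj(y12) + h[1, 1] * np.conj(y22)) / g
    x2_hat = (np.conj(h[0, 1]) * y11 + np.conj(h[1, 1]) * y21
              - h[0, 0] * np.conj(y12) - h[1, 0] * np.conj(y22)) / g

    # (e) hard decision and bit-error count
    errs = np.sum(np.sign(x1_hat.real) != x[0]) + np.sum(np.sign(x2_hat.real) != x[1])
    ber[ebn0_dB] = errs / (2 * nsym)

print(ber)   # (f) BER vs Eb/N0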
Figure: BER plot for 2 transmit 2 receive Alamouti STBC
## Observations
1. We can observe that the BER performance is much better than the 1 transmit 2 receive MRC case. This is because the effective channel, concatenating the information from 2 receive antennas over two symbols, results in a diversity order of 4.
2. In general, with $m$ receive antennas, the diversity order for 2 transmit antenna Alamouti STBC is $2m$.
3. As with the case of 2 transmit, 1 receive Alamouti STBC, the fact that $(H^HH)$ is a diagonal matrix ensured that there is no cross talk between $x_1$, $x_2$ after the equalizer and the noise term is still white.
## Reference
Siavash M. Alamouti, “A Simple Transmit Diversity Technique for Wireless Communications,” IEEE Journal on Selected Areas in Communications, Vol. 16, No. 8, October 1998.
Dean88 March 11, 2013 at 3:01 pm
What kind of simulation methods?
monte carlo methods, my guess
Krishna Sankar March 13, 2013 at 5:44 am
@Dean88: monte carlo simulations
krishna March 5, 2013 at 11:07 pm
Can anyone provide me an algorithm or Matlab code for spatial modulation?
Krishna Sankar March 13, 2013 at 6:11 am
@krishna: are you looking for posts on multiple antenna transmission/reception? if so, please checkout
http://www.dsplog.com/category/mimo/
Jeremiah January 14, 2013 at 11:51 am
Hi Krishna,
Do you know how to simulate 4×4 16QAM Symbol Error Rate under rayleigh condition? Please advise.
Regards,
Krishna Sankar January 17, 2013 at 5:33 am
@Jeremiah: For posts on MIMO and BPSK, please check out
http://www.dsplog.com/category/mimo/
chiran January 12, 2013 at 6:05 pm
How can we implement Alamouti coding (2×1) or (2×2) in mobile WiMAX under different speeds of the subcarriers…?
Krishna Sankar January 17, 2013 at 5:29 am
@chiran: Are you referring to loading independent modulation schemes in different subcarriers. Yes, that is possible.
kiran January 12, 2013 at 11:56 am
Dear Krishna sir,
I want to simulate 2×2 mimo ofdm system. But I am confused whether to perform channel coding(mimo) first or OFDM modulation first,. also where to insert the pilot for channel estimation? after MIMO or after OFDM modulation? Please reply
Krishna Sankar January 17, 2013 at 5:28 am
@kiran:
a) Typically, it is first the MIMO modulation, then OFDM. The model is
y = Hx + n
where
y is 2×1 matrix of received symbol
H is the 2×2 mimo channel
x is the 2×1 matrix of the transmited symbol
n is the 2×1 matrix of noise
b) Pilots need not be MIMO coded, and typically it is not. The pilots are defined in the subcarrier domain.
ankita January 1, 2013 at 8:33 pm
Hello sir, please help with Alamouti STBC with OFDM in the presence of carrier frequency offset with 16QAM and 64QAM modulation. I have results for QPSK modulation, but when I make changes according to QAM modulation I do not get the desired results… Please help me to solve this problem.
Thank u
Krishna Sankar January 2, 2013 at 5:36 am
Vishwas December 31, 2012 at 3:32 pm
|r0| = |h00 h01| |s0|
|r1|   |h10 h11| |s1|
is the received signal. We multiply with H^-1 to get s0 and s1 back.
We get |h00|^2 + |h01|^2 + |h10|^2 + |h11|^2 for matrix A;
what will we get for matrix B?
Krishna Sankar January 2, 2013 at 6:13 am
@Vishwas: Are you looking for H^H, that is the Hermitian i.e. conjugate transpose of the matrix
Vishwas January 16, 2013 at 5:29 pm
Yes, I want the equations for Matrix B.. And also, can you please help me with HARQ. Any links or material regarding it will be very helpful.. I want to know about both types, Chase combining and incremental redundancy..
Krishna Sankar January 17, 2013 at 6:29 am
@Vishwas: Sorry, I have not written anything on HARQ
ankita December 31, 2012 at 10:12 am
Helo sir,
I m working on STBC OFDM. I have used QPSK as my modulation technique. but whenever I change my modulation technique from QPSK to16 QAM or 64 QAM i do not get the desired results. Please help me with this
Krishna Sankar January 2, 2013 at 6:14 am
@ankita: For posts on general M-QAM modulation with OFDM, please check out
http://www.dsplog.com/2012/01/01/symbol-error-rate-16qam-64qam-256qam/
Tanbir December 3, 2012 at 3:26 pm
Dear sir,
This is my first time on your blog…
I am studying in UM(Malaysia), Working on a project on MIMO-OFDMA
I already have studied (I think) 9 hours at a stance, What you have created is a blessings for the new researchers,
My question is about the receiving section,
Can you tell me how the Parallel data is being handled at the Rx?
For an Example;
If I try with a 2×2 antenna (STBC OFDMA), after getting the signal into the receiver (and equalizing it), how am I going to recover my original data? Can you explain or give me any reference?
Thanking you,
Tanbir
Krishna Sankar December 4, 2012 at 6:18 am
@Tanbir: One will have a buffer to convert the parallel data to serial and use. Check out the OFDM related posts @ OFDM http://www.dsplog.com/category/ofdm/
Rajopadhhay November 25, 2012 at 11:46 pm
Dear Mr. Krishna,
If we use a simple AWGN channel model, instead of considering it to be Rayleigh fading, then does that mean that the channel coefficients h1 and h2 equal 1, since we are only adding noise? I don't know if I am asking a stupid question, but it would be nice if you could let me know.
Best,
Raj
Krishna Sankar November 27, 2012 at 4:51 am
@Rajopadhhay: Yes, assuming h1=h2=1 will take the Rayleigh fading out of the way
Jyothi Swaroopa November 9, 2012 at 2:22 pm
Sir,
I have been working on the paper “improved interference cancellation scheme for two-user detection of alamouti code” and I derived all the required mathematical equations and proved them for 2*2, 2*3, 2*m, but I couldn't do the simulation. Please help me if you can…
Krishna Sankar November 12, 2012 at 6:58 am
@Jyothi: I wont be able to help in the coding part. Some quick pointers to help in debugging the code will be:
a) Make sure that the decoding has zero errors in no-noise case
b) Use BPSK modulation
urvik November 8, 2012 at 8:25 pm
Hello sir,
Can you please explain channel estimation in OFDM, providing Matlab code?
Krishna Sankar November 18, 2012 at 7:13 am
@urvik: The channel estimation in typical ofdm systems is relatively simple and is done with the aid of a known preamble.
The system model for each subcarrier is,
Y = HX + N, where
Y – is the received symbol
H – is the unknown channel
X – is the known preamble and
N – is the noise
The noisy estimate of channel H’ = Y/X
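A one-subcarrier-per-bin sketch of that estimate (my illustration; the preamble, channel, and noise here are all synthetic):

import numpy as np

rng = np.random.default_rng(2)
nfft = 64
X = 2 * rng.integers(0, 2, nfft) - 1      # known BPSK preamble, one value per subcarrier
H = (rng.normal(size=nfft) + 1j * rng.normal(size=nfft)) / np.sqrt(2)
N = 0.1 * (rng.normal(size=nfft) + 1j * rng.normal(size=nfft))

Y = H * X + N        # per-subcarrier model after the receiver FFT
H_hat = Y / X        # least-squares estimate; the residual error is N / X
print(np.mean(np.abs(H_hat - H) ** 2))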
maharshi November 7, 2012 at 3:43 pm
hello sir,
I am right now developing the Alamouti 4×4 scheme, so I want to develop the combiner output for that. I need to know whether I have to develop the combiner output equations for 4×4 in the same manner or in a different way. I also need help with the internal description of the combiner circuit and channel estimator in Alamouti.
Krishna Sankar November 12, 2012 at 7:03 am
@maharshi: what is the coding scheme for the 4×4 channel which you are using?
maharshi November 12, 2012 at 8:10 am
sir, i am doing for alamouti STBC. So, i am following same theory of 2×2, which is extended upto 4×4. And transmitting matrix is (s1 s2 s3 s4 ; s2 -s1* s4 -s3*; and two other rows). But now i am not able to produce the output of the combiner.
Krishna Sankar November 18, 2012 at 7:11 am
@maharshi: For debugging, try to build the decoder with no noise.
maharshi November 19, 2012 at 6:33 pm
ok, so I need to take y = H*x? Or am I not able to get what you want to say???
Aditi October 30, 2012 at 8:46 am
can u plz help me out wid Alamouti OFDM with 16QAM and 64QAM modulation……
Krishna Sankar November 2, 2012 at 6:40 am
ari November 2, 2012 at 7:46 am
@Mr.Krishna Sankar: I modified my codes and combined to 16QAM concept (yours also), but the SER is not right. Which part that I need to concern about if I want to obtain a proper result? Thank you.
Krishna Sankar November 12, 2012 at 7:09 am
@ari: Make sure that the noise addition is correct. And the also the symbol to bit decoding.
ashu October 24, 2012 at 11:27 pm
Sir, you are right that the concepts of MIMO decoding are not affected by OFDM. But the problem is with the channel model: in your code you take a channel with N coefficients, but in the practical case we have a channel of 3, 5 or 10 taps. If the channel has N coefficients it becomes a frequency selective channel, so the channel model varies within one OFDM symbol… (Please help me, sir)
Krishna Sankar October 25, 2012 at 5:21 am
@ashu: Well, even if the channel has multiple taps, as long as the tap duration is lesser than the cyclic prefix, the channel coefficient for each subcarrier can be treated as an independent flat fading. Have discussed this for a SISO rayleigh channel + OFDM case @
http://www.dsplog.com/2008/08/26/ofdm-rayleigh-channel-ber-bpsk/
ashu October 21, 2012 at 9:48 pm
First of all I would like to say thanks for such wonderful blog….sir presently i am working on MISO OFDM system,,sir in your blog you discuss about ofdm and mimo separately but not combined can you do so….
Krishna Sankar October 24, 2012 at 7:53 am
@ashu: Thanks.
The underlying concepts of MIMO decoding is not getting affected by OFDM. For OFDM related posts, please check out
http://www.dsplog.com/category/ofdm/
Aditi October 13, 2012 at 11:24 pm
hello sir,
can u plz help out regarding the theoritical calculaton(BER) simulation for Almouti STBC with CFO…
thnx
Krishna Sankar October 17, 2012 at 6:20 am
@Aditi: Sorry, have not looked into that aspect
MINAL October 13, 2012 at 1:09 am
Sir can we make this Alamouti STBC coding for receiver antenna selection for MIMO OFDM?
& Can u please tell me what are these -x2* & x1*?
Krishna Sankar October 17, 2012 at 6:20 am
@MINAL: For OFDM related posts, please checkout http://www.dsplog.com/category/ofdm
Did you want to do receiver selection along with Alamouti STBC?
anusha November 5, 2012 at 4:42 pm
i want information about sfbc mapping and demapping. any one pls help me.
priya September 26, 2012 at 4:56 am
Hi sir,
First of all I would like to say thanks. bcz your coding 2×1 is more helpful for me.
Thank you sir,……
sir i need one help.. i need coding for 4×1, 3×1… i tried many time by change the coding, but i still getting error… so plz help me sir
Krishna Sankar September 26, 2012 at 5:23 am
@priya: Thanks. what is the code which you are using for 4×1, 3×1 cases? Can you please point me to that.
Further, are you getting zero error in the no noise case?
benza September 16, 2012 at 5:06 pm
can you explain me about 4×4 mimo
thanx
Krishna Sankar September 18, 2012 at 5:41 am
@benza: I have not gone through the 4×4 mimo case, but you can find posts on the 2×2 MIMO with different equalizer structures in
http://www.dsplog.com/tag/mimo/
Hope this helps.
mina August 21, 2012 at 9:59 am
hi
plz help me for MATLAB. can u?
Krishna Sankar August 22, 2012 at 8:26 am
@mina: Hmm… you can post your query. will try to help.
Reetu June 24, 2012 at 4:29 pm
thanks
Krishna Sankar June 26, 2012 at 6:18 am
@Reetu: I do not have posts discussing OFDM and STBC combined, but have
http://www.dsplog.com/tag/ofdm
http://www.dsplog.com/tag/stbc
LEETHENG May 13, 2012 at 1:51 pm
Dear Krishna,
If I suppose there is a feedback channel, so the receiver tells the transmitter the channel estimate, then the transmitter would do forward error correction; this could be implemented if I multiplied the modulated bits by 1/(Rayleigh fading). But when I did that, I didn't get the plot. Any suggestions about the reason?
Krishna Sankar May 15, 2012 at 5:51 am
@LEETHENG: Hmm… are you getting zero ber even in the case of no noise?
Also, try looking at http://www.dsplog.com/2009/04/13/transmit-beamforming/
Venkatraman April 27, 2012 at 7:54 pm
Hello sir,
In Virtual MIMO I read that one mobile shares the transmitter in other mobiles so that it looks as if it has many antennas. But the thing is how will the neighboring mobile know of my data? For eg: if x1 and x2 are my data and in the first time slot I send x1 in my mobile transmitter and x2 in my neighboring mobile transmitter how will my neighbor have that data x2?
Krishna Sankar May 2, 2012 at 4:53 am
@Venkataraman: Hmm… I have not read about the concept of virtual MIMO. Sorry
Jayashree February 28, 2012 at 7:14 pm
Hi Krishna
i want to implement the Alamouti scheme with 2 transmit and one receive antenna using the following equation for SINR: $\mathrm{SINR} = \frac{(|h_{11}|^2 + |h_{12}|^2)\,E(x_1^2)}{\sigma_n^2 + |h_{1k}|^2\,E(x_i^2)}$.
Krishna Sankar March 5, 2012 at 5:29 am
Amin Mehul November 11, 2011 at 4:05 pm
I want the transmission matrix for 2-transmit & 2-recieve antennas. Is there any mathematical procedure to get it?
Krishna Sankar November 15, 2011 at 5:37 am
@Amin: Did you want to check posts in http://www.dsplog.com/tag/mimo?
WASEEM August 31, 2010 at 3:23 pm
Hello Sir
Thanks for this info, it has really been helpful.
The simulation is in Matlab; can it be done in Matlab Simulink, or is that done in this blog?
Please let me know, sir.
Krishna Sankar September 1, 2010 at 7:02 am
@WASEEM: The code in the blog is in Matlab. Ofcourse, it can be done in Simulink (I do not not have simulink).
WASEEM September 2, 2010 at 2:58 pm
ok
Pattaraporn August 8, 2010 at 11:56 am
Hello Sir,
I will write a program for 2×2 spatial multiplexing and space time coding with 4-QAM and 16-QAM modulation respectively, but I do not know how to start it. I would like your suggestions.
Krishna Sankar August 10, 2010 at 5:07 am
@Pattaraporn: Try to see if the code present in the following links help
http://www.dsplog.com/tag/mimo/
http://www.dsplog.com/tag/alamouti/
Anchin June 27, 2010 at 10:37 pm
Hello Sir,
Thanks for this forum, it has really been helpful.
Please I would like to clarify something.
When you say
The estimate of the detected symbol is,
(x1,x2*) i was thinking why x2*, why not just (x1 and x2)
Krishna Sankar June 28, 2010 at 6:20 am
@Anchin: The x2* conjugate came up because it made it more convenient to represent it in matrix notation.
dolly April 16, 2010 at 1:10 am
hello sir,
Why are taps used for OFDM in a Rayleigh channel with BPSK modulation? Is STBC with OFDM the same as STBC with MC-CDMA?
Krishna Sankar April 18, 2010 at 2:18 pm
@dolly: My replies
a) It depends on the scenario which one wishes to simulate. I wanted to simulate a multipath channel case
b) Thats a difficult question to answer. Ofcourse the simplest answer is no, however there can be finer shades of gray depending on what you are trying to achieve with these two technologies.
RP Singh April 9, 2010 at 1:51 pm
I have a problem regarding data transmission using SFBC-OFDM.
How will this be performed?
WiMAX March 11, 2010 at 3:08 am
Hi Krishna, here is my question.
How did you combine the received signals?
Have you used MRC at the receiver?
I have a system of 2×2 with OFDM-STBC but i can’t understand how to sum up all the signals at the receiver side. After applying FFT at the receiver should i apply alamouti STBC and then MRC?
Any ideas?
Krishna Sankar March 28, 2010 at 3:59 pm
@WiMax: The equation discussed in this post should hold good. Try formulating the problem as described above and then apply $(H^H H)^{-1} H^H$ as the equalization matrix
WiMAX February 17, 2010 at 3:12 am
Hi Krishna, i have the same question with wap.
Can you guide us?
Krishna Sankar April 4, 2010 at 3:52 am
@WiMAX: What is WAP?
niks February 4, 2010 at 2:51 pm
hiii
in alamouti matrix why complex conjugate is taken for complex signal??
any technical reason????
dilla February 18, 2010 at 7:08 am
how to put the correlation at the transceiver or receiver…where should i put the function? plizzz help me..
Krishna Sankar March 31, 2010 at 5:43 am
@dilla: Are you talking about antenna correlation?
Krishna Sankar April 4, 2010 at 4:23 am
@niks: It makes the channel orthogonal
wap January 13, 2010 at 11:15 am
hi sir………….
hi krishna……….
How can one combine STBC+MIMO or STBC+OFDM, if possible?
Have you tried it? Where can I get a tutorial about it?
In Alamouti STBC with a 2-antenna transmitter and receiver, if I want to add more antennas, for example 1 transmitter with many receivers, or many transmitters and receivers, what should be changed in your program?
thanks before…………..
norbert January 5, 2010 at 3:35 pm
Hi,
My question would be: why is there no transmission power in the received signal equations? Are the signal powers assumed to be one on all transmit antennas, or are they comprised in the H channel matrix?
ehab December 26, 2009 at 1:45 am
Dear Sir;
I just want to know why you change the matlab code for making the Alamouti STBC from the previous (Alamouti STBC) you used
“sCode = zeros(2,N);
sCode(:,1:2:end) = (1/sqrt(2))*reshape(s,2,N/2); % [x1 x2 ...]
sCode(:,2:2:end) = (1/sqrt(2))*(kron(ones(1,N/2),[-1;1]).*flipud(reshape(conj(s),2,N/2)));”
and now you use
“sCode = 1/sqrt(2)*kron(reshape(s,2,N/2),ones(1,2));”
Thank you
Regards
Dobs December 19, 2009 at 4:38 pm
Thank you for the post. I was going to ask you: what if we transmit the same symbol at the same time on the two different antennas, i.e. transmitting s0 and s0 instead of s0 and s1 in the 2Tx, 1Rx Alamouti scheme? Please can you explain what happens in this case?
Thank you,
Krishna Sankar December 23, 2009 at 5:43 am
@Dobs: If we transmit the same symbol on both the antennas, then there will be no diversity gain. You can see a brief discussion on this in the post on beamforming
http://www.dsplog.com/2009/04/13/transmit-beamforming/
apanong December 11, 2009 at 1:15 pm
I got BER as a line at 10^-0.6 with no noise and the channel taps as unity. so do u know what is my problem?
Krishna Sankar December 22, 2009 at 5:18 am
@apanong: hmm… getting 25% error in ideal condition is not desirable, though I am unable to guess what is wrong. Are you using BPSK modulation?
apanong December 7, 2009 at 4:07 am
hi,
I’m trying with Alamouti 2*2 and QPSK modulation. I used your code, but change at the part of creating signal and the part of counting the errors to suitable to QPSK. I think the channels and the noise are still the same, rite?
but i could not get the result.
don’t know why.
could u give me some suggestions?
Krishna Sankar December 7, 2009 at 5:40 am
@apanong: Are you getting zero BER with no noise and the channel taps as unity?
wosamw November 23, 2009 at 9:46 pm
hi
OK, but how can one implement OFDM and OFDMA in STBC without discussing 802.16e?
thanks
martin November 15, 2009 at 3:20 am
thanks for your good website and codes, If I want to use lognormal distribution fading channel instead of rayleigh how do I generate the lognormal fading symbols?
Krishna Sankar December 3, 2009 at 5:37 am
@martin: I have not discussed log-normal distribution in posts till date. Hope the wiki entry helps you
http://en.wikipedia.org/wiki/Log-normal_distribution
wosamw November 3, 2009 at 2:04 am
hi
my dear
Krishna Sankar
can you provide me simulink model for STBC MIMO OFDM in wimax IEEE802.16e
thanks
Krishna Sankar November 8, 2009 at 8:40 am
@wosamw: Sorry, I have not discussed STBC used in 802.16e
sotiiis October 17, 2009 at 8:14 pm
Hello Krishna, first of all congratulations on your posts, they are really enlightening. One thing I can't understand is the calculation of H(hermitian)*H, which is critical to show the diversity of 4 for this Alamouti scheme. Could you be more detailed and show this calculation? Thanks!
Krishna Sankar October 27, 2009 at 5:08 am
@sotiiis: One way to look at that is:
If you see the y = Hx + n equation, we can see that we have four copies of x1 and four copies of x2 at the receiver. Since the code is orthogonal, they do not interfere with each other. Hence the diversity order is four.
Does this help?
najat yahya October 6, 2009 at 5:03 am
Dear Krishna:
I am Najat; I work on OFDM. I know the bit error rate increases more when the channel is frequency selective fading than when it is a flat fading channel, using any modulation technique (like binary phase shift keying) to transmit the data. Can you help me write a MATLAB program to show that?
I also need to show that OFDM is an effective technique in the case of a frequency selective fading channel because it improves the bit error rate.
Krishna Sankar October 8, 2009 at 5:29 am
@najat: Please refer to the post
http://www.dsplog.com/2008/08/26/ofdm-rayleigh-channel-ber-bpsk/
It shows that BER for OFDM with a 10 tap Rayleigh channel is equivalent to flat fading case.
WiMAX October 4, 2009 at 10:24 pm
Thanks Krishna, i will work on it!
praneeth September 30, 2009 at 10:32 pm
hi sir,
Are there any IEEE (or any conference) papers published on this work?
Krishna Sankar October 1, 2009 at 5:32 am
@praneeth: The contents are discussed in text books, so am sure that IEEE papers should be available. However, I have not done the search and hence unable to provide you pointers.
WiMAX September 19, 2009 at 5:12 pm
Hi Krishna, i’m working on Diversity exploitation in MIMO-OFDM (using STBC). I want to make a simulation with the help of Matlab of 2×2 system.
Any ideas or help regarding this subject?
Can i use OFDM instead of BPSK and if yes how can i do that?
Krishna Sankar October 1, 2009 at 4:53 am
@WiMAX: Yes, you can use BPSK sent over OFDM. You may look at an example OFDM BER simulation in Rayleigh channel @
http://www.dsplog.com/2008/08/26/ofdm-rayleigh-channel-ber-bpsk/
mak_m September 9, 2009 at 3:15 pm
Thanks very much. This post is very helpful. Can you please tell me the name of the book you are following, so I can read about it in more detail and reference it in my report if needed? Can I access that book online for free?
surbhi August 24, 2009 at 12:01 pm
hi krishna,
You are doing a very good job…
I have one doubt; please find some time to clarify it:
what is the physical significance of using “kron”, which is based on the Kronecker product, while implementing Alamouti?
surbhi
Krishna Sankar August 25, 2009 at 5:41 am
@surbhi: kron does not have any physical significance in understanding Alamouti STBC. However, I used it to make the matlab code run faster by performing matrix operations (instead of for loops).
Ustun July 3, 2009 at 3:20 am
Thanks for the great website, it is very helpful.
I observe that if we use only AWGN channel, and omit the Rayleigh channel, Alamouti scheme yields
- about 3dB better for 2×2 case compared to SISO,
- but yields the same performance as SISO for 2×1 case.
Could you comment on why that happens?
Krishna Sankar July 6, 2009 at 5:27 pm
@Ustun: By AWGN channel for 2×2 MIMO case, I believe you used a diagonal channel, right? That is equivalent to having two single channel SISO case.
One question: How did you make a 2 transmit, 1 receive AWGN channel?
mohammed June 4, 2009 at 5:41 pm
How can I calculate the transmitted power while using the 2*2 Alamouti scheme, where BW, data rate, path loss and distance between transmitter and receiver are given?
Is there any formula available to calculate that?
Krishna Sankar June 7, 2009 at 2:17 pm
@mohammed: Well, the transmit power is something which is controlled by power amplifier in the transmitter. I am guessing that your question is, given the value of transmit power, path loss, distance etc, how can i calculate the power received at the receiver. For computing the received power, you may refer to the path loss equations @
http://en.wikipedia.org/wiki/Log-distance_path_loss_model
Hope this helps.
Ken May 25, 2009 at 7:39 pm
Hello there!
Great codes! Is there a way to calculate the theoretical value of BER VS SNR for the 2X2 Alamouti Scheme?
Thanks alot!
Krishna Sankar May 31, 2009 at 8:19 pm
@Ken: Well, I would expect that Alamouti with 2 receive antennas will perform 3dB poorer than 1-transmit 4-receive Maximal Ratio Combining case. Agree?
fof May 20, 2009 at 5:28 pm
would you help me??
i need a very simple code which uses only for loops or while loops….for alamouti STBC (likelihood detection)….for BPSK
fof May 20, 2009 at 5:24 pm
i need a very simple code which uses only for loops or while loops for alamouti stbc for BPSK likehood…..not with hard decision…..sorry if i’m bothering you….please help me……thanx
Krishna Sankar May 22, 2009 at 5:28 am
@fof: I think, it should be reasonably easy for you to modify the current code and make it into for-loops.
fof May 23, 2009 at 12:46 pm
ok thanx alot……it’s done )
SANTOSH April 22, 2010 at 5:56 pm
@fof: can you give the code for alamouti stbc for BPSK likehood which uses only loops
Marcus May 1, 2009 at 12:40 pm
what if it’s theoretical BER for 2 rx antennas with STBC? what is the expression. Thank you.
Krishna Sankar May 12, 2009 at 4:47 am
@Marcus: Am just guessing…. I would think that the theoretical BER for Alamouti STBC with 2 rx antennas will be 3dB poorer than the theoretical BER for 1 transmit 4 receive MRC case. Do you agree?
jefferson April 27, 2009 at 8:10 am
Why do no posts exist for STBC with
1) 4 or more transmitters
2) the QPSK coding method?
UP April 26, 2009 at 6:35 am
What is the expression for theoritical BER for 2 Rx antennas?
Krishna Sankar April 30, 2009 at 5:23 am
@UP: Did you mean theoretical BER for 2 rx antennas with STBC or with MRC only. If it’s MRC alone, you may refer to the post
http://www.dsplog.com/2008/09/28/maximal-ratio-combining/
mimo April 26, 2009 at 12:29 am
Hello,
I am using real measures of h AND stbc and see that the channel (h) doesn’t influence in the BER. The simulation SNR-BER is always the same. Why is it??
thanks you.
Krishna Sankar April 30, 2009 at 5:18 am
@mimo: In this simulation, we are assuming independent Rayleigh fading channel and the channel remains the same for two symbols. What’s your assumption on the channel model?
MIMO May 3, 2009 at 11:59 pm
I have measurements from a tunnel and I have simulated, for SNR = 10dB, the BER-distance graph (50-500 metres). This figure is flat, that is to say the channel doesn't influence Alamouti 2×2. Is it because it is orthogonal??
thanks you.
Krishna Sankar May 12, 2009 at 5:02 am
mimo April 25, 2009 at 10:58 pm
Hello,
why do you multiply by 1/sqrt(2) in the line…
“sCode = 1/sqrt(2)*kron(reshape(s,2,N/2),ones(1,2)) ;”??
Thanks you.
Krishna Sankar April 30, 2009 at 5:17 am
@mimo: To make the total transmit power from both the antenna to be equal to 1.
MIMO April 30, 2009 at 6:30 pm
hello,
I'm wondering why you don't multiply by this factor in your post “MIMO with MMSE equalizer” for V-BLAST??
thanks you.
Krishna Sankar May 12, 2009 at 4:45 am
@MIMO: Well, in the MIMO case, we have two transmit streams and the time duration to send N bits is reduced by 2. Hence we do multiply by 1/sqrt(2). Agree?
Solo April 13, 2009 at 7:22 pm
dear sir,
I have given a task to be done in just three days. Please can you help me doing it as it will be helpful to understand all the subject. apprecuated and here is all the question.
MIMO-OFDM: VBLAST versus STBC
The objective here is to compare VBLAST and Alamouti STBC in the context of MIMO-OFDM operating over frequency-selective Rayleigh fading channels. Consider a 2×2 system with N=64 carriers and a cyclic prefix long enough to avoid interblock interference. QPSK is used in STBC and BPSK is used in V-BLAST in order to have the same spectral efficiency. The discrete-time channels are assumed to be mutually uncorrelated and have L taps each. The taps are uncorrelated and obey an exponential power delay profile, i.e. $E\{|h_k|^2\} = C\exp(-\beta k)$ where $C$ is a constant; take $\beta = 0.2$. It is assumed that the channel does not vary over two OFDM symbols. Assuming perfect knowledge of the channels at the receiver, provide simulation results depicting the average BERs for the two systems versus the average SNR in the cases where L=1, L=4, L=8, and L=16. Comment on the obtained results.
Krishna Sankar April 16, 2009 at 5:46 am
@Solo: I do not have the precise simulations which you were looking for, but I do have articles on MIMO and on STBC using BPSK on a flat fading Rayleigh channel. It should be reasonably easy for you to adapt to the OFDM case.
http://www.dsplog.com/2008/10/24/mimo-zero-forcing/
http://www.dsplog.com/2008/10/16/alamouti-stbc/
Hope this helps. Good luck.
Student A April 8, 2009 at 6:28 am
Hi Krishna,
Any suggest on how to combine MIMO with OFDM?
Krishna Sankar April 11, 2009 at 6:46 am
@student: extending a single antenna OFDM system to multiple antennas is reasonably straightforward. At the transmitter, we should have nTx iFFT's and at the receiver we should have nRx FFT's. Ofcourse, for each tx/rx chain, we should have the other miscellaneous blocks like filters, cyclic prefix insertion etc.
sasmita March 30, 2009 at 10:31 am
I want to simulate the SNR vs frequency offset in OFDM. Krishna, please help me.
Krishna Sankar April 4, 2009 at 4:22 pm
@sasmita: I believe you want to calculate the SNR degradation due to introduction of the frequency offset. A probable quick way is as follows:
(a) Define modulation values for subcarriers in frequency domain X(F). Create the time domain symbol (x) by using IFFT.
(b) Introduce the frequency offset by multiplying x(t) with exp(j*2*pi*f_d*t), where f_d is the frequency offset: y(t) = x(t).*exp(j*2*pi*f_d*t)
(c) Take FFT of y(t) to find Y(F). Find the absolute difference between X(F) and Y(F)
Error Vector Magnitude, EVM = mean(|X(F) – Y(F)|^2)
EVM, dB = 10*log10(EVM).
And EVM is good indication of the SNR degradation. Hope this helps.
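Here is a compact NumPy version of steps (a)-(d) (my sketch, not Krishna's code; the offset is expressed in fractions of the subcarrier spacing, and the EVM is normalized by the signal power so the dB figure reads like an SNR degradation):

import numpy as np

rng = np.random.default_rng(3)
nfft = 64
X = (2 * rng.integers(0, 2, nfft) - 1) + 1j * (2 * rng.integers(0, 2, nfft) - 1)  # (a) QPSK subcarriers

x = np.fft.ifft(X)                                   # time-domain OFDM symbol
f_d = 0.05                                           # offset, in subcarrier spacings
t = np.arange(nfft)
y = x * np.exp(1j * 2 * np.pi * f_d * t / nfft)      # (b) apply the offset
Y = np.fft.fft(y)                                    # (c) back to the frequency domain

evm = np.mean(np.abs(X - Y) ** 2) / np.mean(np.abs(X) ** 2)   # (d), normalized
print(10 * np.log10(evm), "dB")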
SUCHITRA March 19, 2009 at 6:47 pm
How are you getting the theory results? Any proof for the formulae you have used?
Krishna Sankar March 21, 2009 at 4:47 pm
@SUCHITRA: For the theoretical results on 1 transmit, 1 receive Rayleigh channel, you may refer to the posts
(a) http://www.dsplog.com/2008/08/10/ber-bpsk-rayleigh-channel/
(b) http://www.dsplog.com/2009/01/22/derivation-ber-rayleigh-channel/
For the results with with Ntx=1, NRx=2 MRC, I used the equations provided in the textbook. The simulations are provided in
(c) http://www.dsplog.com/2008/09/28/maximal-ratio-combining/
And ofcourse, Ntx=2, NRx=1 Alamouti STBC is 3dB poorer than the 1Tx x 2Rx MRC case.
(d) http://www.dsplog.com/2008/10/16/alamouti-stbc/
Hope this helps.
mustafa April 13, 2009 at 4:43 pm
hi Krishna Pillai.
I want to thank you first for this info. Second, I want to know about modulation and demodulation of 16-PSK, if you can help me please.
Regards
Krishna Sankar April 16, 2009 at 5:43 am
@mustafa: Please refer to the post on Symbol error rate for 16PSK
http://www.dsplog.com/2008/03/18/symbol-error-rate-for-16psk/
Joe March 16, 2009 at 9:41 pm
1. h11,h21,h12,h22, are independent fadings, each of one as a rayleigh distribution variable.
2. Consider 2 independent systems:
a. (TX#1 and TX#2) and RX#1 with h11 h21 fadings.
b. (TX#1 and TX#2) and RX#2 with h12 h22 fadings.
Each of the two systems is treated like the simple 2×1 Alamouti case, including the detection and symbol estimation (yHat1 and yHat2).
3. The detection rule for MRC and Alamouti are the same (the estimated symbol equation).
With all these considerations, I think it is correct to take the sum of yHat1 and yHat2 (obtaining maximal ratio combining) and then apply the "hard decision decoding".
Krishna Sankar March 21, 2009 at 8:57 am
@Joe: hmm… i had thought about it (infact, i recall your earlier comment to Alvina suggesting the same). However, I think that approach is not the optimal way to combine the information from two receive antennas.
I think what you have proposed is like equal gain combining, whereas what is discussed in this post is like maximal ratio combining. I would think maximal ratio combining provides better performance. Do you agree? Kindly share your thoughts.
Alvina March 16, 2009 at 7:21 pm
there is a small error: in the network diagram both channels are marked as h21, whereas one is h12.
Krishna Sankar March 21, 2009 at 8:51 am
@Alvina: Thanks. Indeed, it was a typo. I corrected and uploaded the new figure.
Alvina March 16, 2009 at 6:55 pm
very nice and comprehensive. |
# Does ZFC permit or acknowledge the existence of infinite sets that are uncountable?
By saying infinite sets that are uncountable, I mean that the cardinality of the sets is uncountable (that is $\ge\aleph_1$)
Thanks.
ummm... Yes? Uncountable sets exist in ZFC. – Dustan Levenstein Feb 11 '12 at 19:10
@user24796: Easily. The axiom of infinity tells us there is an infinite set $X$, the powerset axiom tells us it has a powerset $\mathscr{P} X$, and Cantor's diagonal argument tells us that $\mathscr{P} X$ has cardinality strictly greater than $X$. – Zhen Lin Feb 11 '12 at 19:13
"Uncountable" means bigger than $\aleph_0$, not bigger than $\aleph_1$. The latter number is by definition the cardinality of the set of all countable ordinals, which is an uncountable set. – Michael Hardy Feb 11 '12 at 20:47
@Michael Hardy: The question was bigger than or equal to $\aleph_1$. This is the definition of uncountable in ZFC (where as in ZF it only means "not smaller or equal than $\aleph_0$"). – Asaf Karagila Feb 11 '12 at 23:11
Cantor's theorem tells us that if $X$ is a countably infinite set then $P(X)$ is not countable.
In ZFC we assert that if $x$ is a set then $P(x)$ is a set as well, so if you assume the axiom of infinity which asserts that indeed an infinite set exists then there exists an uncountable set as well.
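For completeness, the diagonal argument behind Cantor's theorem is short enough to state here. Given any function $f : X \to \mathcal{P}(X)$, consider the set
\[
D \;=\; \{\, x \in X : x \notin f(x) \,\} \in \mathcal{P}(X).
\]
If $f$ were surjective we would have $D = f(d)$ for some $d \in X$, but then $d \in D \iff d \notin f(d) = D$, a contradiction. Hence no $f : X \to \mathcal{P}(X)$ is surjective, and $|X| < |\mathcal{P}(X)|$ for every set $X$.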
There is more to that as well: How do we know an $\aleph_1$ exists at all?
but according to Wikipedia, it says that the real numbers cannot be characterized in first-order logic alone [en.wikipedia.org/wiki/Real_number#Advanced_properties], and I am a little confused… – user24796 Feb 12 '12 at 14:36
The real numbers cannot be characterized in FOL in the language of rings (or fields), however within set theory you can define high-order structures for other languages. Furthermore, even if the real numbers cannot be characterized in first order logic, they are still a model of the theory. – Asaf Karagila Feb 12 '12 at 15:05
@user24796: To say that the real numbers cannot be characterized in FOL is to say that you cannot have a theory that every model which satisfies this theory is exactly the real numbers (namely complete and Archimedean field). – Asaf Karagila Feb 12 '12 at 15:08
$\mathbb{R}$ and $\mathbb{C}$ are constructed within ZF (and so within ZFC).
The set of real numbers $\mathbb{R}$ springs to mind. Another example is the power set of the natural numbers (which exists by the power set axiom of ZFC), and that is uncountable by Cantor's theorem.
# Paths on a Grid
## Paths on a Grid
Below is an 8 by 8 grid. Point A is on the square at the top left corner, and point B is on the square at the bottom right corner.
If you can only move down and to the right, how many different paths exist between A and B? Here are three of the possible paths as examples:
Let's have a closer look at these paths:
• How many right moves and how many down moves are there in each path?
• Does the order of these right and down moves make any difference for reaching point B?
Try drawing more paths as needed:
In fact, every successful path for an 8x8 grid will involve exactly 7 moves down and 7 moves to the right. The number of decisions to select the right or the down path to go will determine the total number of paths.
## A Solution Using Pascal's Triangle
On the other hand, you may want to study this problem by creating smaller squares.
There is only one unique path from A to C. Likewise, there is only one path from A to D.
We may indicate this by placing a 1 in those squares.
Now we label B as the square in the second row and second column.
There are two unique paths from A to B, so we write 2 in this square.
We have filled the 2x2 square.
Now we can try a 3x3 square. We already knew about C, D, and F.
Likewise, you can only form one path to E and one path to H.
Try to find the number of paths from A to G and A to I.
There are three paths to each and, finally, we can look at the bottom right corner.
Let's find the total number of paths from A to B in a 3x3 square.
You may draw different sizes of squares and continue calculating the number of unique paths that lead to each square in the grid.
Use the canvas here to work on the smaller squares;
Fortunately, the pattern is quite familiar to us – the rows of Pascal's Triangle!
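Here is a minimal sketch of the same square-filling procedure (in Python, as an illustration rather than part of the original lesson): each square's count is the sum of the counts of the square above it and the square to its left.

```python
# Fill each square with the number of right/down paths from the top-left square.
def path_counts(n):
    counts = [[1] * n for _ in range(n)]  # one path to every square in the first row/column
    for r in range(1, n):
        for c in range(1, n):
            # every path arrives either from above or from the left
            counts[r][c] = counts[r - 1][c] + counts[r][c - 1]
    return counts

table = path_counts(8)
print(table[7][7])  # 3432; the anti-diagonals of the table are rows of Pascal's Triangle
```

Running this for a 2x2 square gives 2 and for a 3x3 square gives 6, matching the counts built up by hand above.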
## A Solution Using Counting Techniques
Since every path consists of a total of 14 moves, 7 down and 7 right, our job is to select which 7 of the 14 moves go to the right. You can draw smaller grids and count the number of paths as a way to help see this pattern.
We can use counting techniques to calculate how many different ways we can make this selection. You may be familiar with the notation $C(14,7)$. This is a way to represent the number of ways to select 7 objects from a set of 14.
$C(14,7) = 3432$
14 Choose 7 = 3432
There are 3432 unique paths between A and B.
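As a quick check (a sketch assuming Python 3.8+, not part of the original lesson), the built-in `math.comb` returns the same count as the grid-filling procedure:

```python
import math

print(math.comb(14, 7))  # 3432, matching the square-filling result above
```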
## Extension
• Find out how many possible paths exist between A and B, moving only right and down, in an n x n square grid.
• Find out how many possible paths exist between A and B, moving only right and down, in an n x m rectangular grid. |
# Fourier transform isometry
1. Jun 3, 2009
### creepypasta13
let S(R) be the Schwartz space, M(R) be the set of moderately decreasing functions, and F be the Fourier transform.
Suppose F:S(R)->S(R) is an isometry, i.e. it satisfies ||F(g)|| = ||g|| for every g in S(R).
How is it possible that there exists a unique extension G: M(R)->M(R) which is an isometry, i.e. a function G: M(R)->M(R) such that for any g in S(R) we have G(g) = F(g), and for any g in M(R) we have ||G(g)|| = ||g||?
2. Jun 4, 2009
### maze
What is the space of moderately decreasing functions?
The way you do it for L2 is to extend F:S->S by density of S in L2.
3. Jun 4, 2009
### creepypasta13
extend it by density? what does that mean?
4. Jun 4, 2009
### jostpuur
Suppose $X$ and $Y$ are Banach spaces, $S\subset X$ is some subspace, and $T:S\to Y$ is a bounded linear mapping. There exist extensions of $T$ which are bounded linear mappings $\overline{T}:X\to Y$. The claim is that if $S$ is dense in $X$, then there exists only one such extension $\overline{T}$; its uniqueness follows from the density of $S$.
The proof starts with a remark that the inequality
$$\| Tx_n - Tx_k\| \leq \|T\| \|x_n - x_k\|$$
implies that when $x_1,x_2,\ldots\in S$ is a Cauchy sequence, then so is $Tx_1, Tx_2,\ldots\in Y$.
5. Jun 5, 2009
### creepypasta13
is there any other way to prove it without banach spaces? we've never covered that in my class and its not covered in my textbook
6. Jun 5, 2009
### jostpuur
I was merely trying to mention the fact in a general context. Hilbert spaces are always Banach spaces, and $L^2(\mathbb{R})$ space is a Hilbert space, so what I said does not force anyone away from $L^2$-stuff.
7. Jun 5, 2009
### creepypasta13
you're tying to prove :"The claim is that if is dense in X, then there exists only one extension , and it is the unique extension of T which is unique by the density of S.", right?
I don't understand how the inequality you mentioned implies that the extension is unique.
8. Jun 5, 2009
### maze
The idea is to define a new "extended" map F2 such that F2 agrees with F on S, and the way it is extended to L2 is via limits of sequences in S.
In other words, F2(s) = F(s) for all s in S, and $F_2(x) = \lim_{n\to\infty} F(s_n)$ for x not in S, where $s_n$ is some sequence in S converging to x.
There are a few things to check to make sure this is legit:
1) If 2 sequences converge to the same point x, F2(x) is the same regardless of which sequence is chosen.
2) F2 is bounded, linear.
3) F2 is the only bounded linear extension of F (uniqueness)
This entire process is known as "extending by density", and shows up all the time. It shows up so much that authors rarely go through the whole process but rather will say things like "the result follows by density".
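In symbols, a compact sketch of the construction (using the notation from the posts above, with S dense in L2): for $x \in L^2$, pick any sequence $s_n \in S$ with $s_n \to x$ and set
$$F_2(x) = \lim_{n \to \infty} F(s_n),$$
where the limit exists because $(s_n)$ being Cauchy makes $(F(s_n))$ Cauchy – by the inequality quoted earlier – and $L^2$ is complete.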
Last edited: Jun 5, 2009
9. Jun 6, 2009
### creepypasta13
but how do you show the extension is unique? by using the inequality mentioned a few posts above?
10. Jun 6, 2009
### maze
That inequality is important and highly tied up in the whole affair, but generally you would use it on the previous parts. Uniqueness is actually the easy part. Once you know that F2 is a bounded linear extension of F from a dense subset to the whole space, then you can argue as follows:
Suppose there were two bounded linear extensions F2 and F3. Since F2 and F3 are continuous, they are sequentially continuous. On the other hand, they must agree on S. Therefore, for any convergent sequence in S, $s_n \to x$,
$F_2(x) \leftarrow F_2(s_n) = F_3(s_n) \rightarrow F_3(x).$
The only way this is possible is if F2(x)=F3(x), since a sequence can only converge to one thing (in a metric space).
(why? If the points F2(x) and F3(x) are a nonzero distance d away from each other, then just take n large enough that $\|F_2(s_n) - F_2(x)\| < d/2$ and $\|F_2(s_n) - F_3(x)\| < d/2$, and apply the triangle inequality.)
11. Jun 6, 2009
### creepypasta13
i think i'm able to prove the existence part, but i dont see how showing F2 is linear is relevant
12. Jun 6, 2009
### maze
It would kind of suck if the Fourier transform wasn't linear. Half the stuff you do with Fourier transforms would go out the window. )-:
Anyways, it just so happens that it is linear, so you might as well prove it.
EDIT: Oh also, just remembered, you need linearity for boundedness and continuity to be equivalent, so it is important for the proofs. |
## Publication bias?
There’s a new paper called Selective reporting and the Social Cost of Carbon, that is being lapped up with glee by the largely unskeptical. As I understand it, the basic argument is that if one analyses the published estimates for the Social Cost of Carbon, there is an indication of a publication bias, which can then be used to estimate the unbiased Social Cost of Carbon.
When I noticed this, it rang a bell, so I went back through some things and discovered a similar paper with one common co-author. This one is called Publication bias in Measuring Anthropogenic Global Warming and it is quite remarkable, in the "seriously, someone's actually done this?" kind of way. When I first saw this, I decided not to discuss it, but thought I might now, as an illustration of what this newer paper has probably done.
Credit: Reckova & Irsova (2015)
The basic argument is related to regression toward the mean. If your initial sample is small, the result could be a long way from the “true” mean, but with a large uncertainty, and could be either larger than, or smaller than, the “true” mean. As you increase the sample size, the difference should get smaller (but with results that are both larger than and smaller than the mean) and the uncertainty should reduce. The larger the sample, the closer the result should be to the “true” mean, and it should become more and more precise. If, however, there is some kind of publication bias (for example, negative results don’t get published) then you would see the results becoming more precise from one side only, as illustrated by the figure on the right.
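To make the funnel-plot signature concrete, here is a minimal simulation sketch (my own illustration, not from either paper): unbiased studies scatter on both sides of the true mean, with the scatter shrinking as precision grows, while censoring one side produces the half-funnel described here.

```python
import random

random.seed(1)
TRUE_MEAN = 3.0

def study(n):
    """One simulated study: the mean of n noisy measurements plus its standard error."""
    xs = [random.gauss(TRUE_MEAN, 2.0) for _ in range(n)]
    return sum(xs) / n, 2.0 / n ** 0.5

studies = [study(random.randint(5, 200)) for _ in range(500)]

# Suppose only estimates above the true mean get "published" (one-sided censoring).
published = [(m, se) for m, se in studies if m > TRUE_MEAN]

# Plotting estimate vs. precision (1/se) for `studies` gives a full inverted funnel;
# doing the same for `published` leaves only one side of it.
all_mean = sum(m for m, _ in studies) / len(studies)
pub_mean = sum(m for m, _ in published) / len(published)
print(f"true: {TRUE_MEAN}  full literature: {all_mean:.2f}  censored: {pub_mean:.2f}")
```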
Credit: Reckova & Irsova (2015)
What they do in this study is to apply the same argument to estimates of climate sensitivity. What they find – as shown in the figure to the left – is that there is a tendency for the more precise estimates to have a lower climate sensitivity. They therefore conclude that there is a bias, saying: "In the absence of publication bias these figures should look like an inverted funnel. However, Figure 3 depicts only the right-hand side of the inverted funnel and the left-hand side is completely missing, indicating publication selectivity bias."
They then analyse this and conclude that the unbiased climate sensitivity is somewhere between 1.4°C and 2.3°C, despite the published estimates having a mean of 3.3°C. What they, of course, fail to realise is that the reason the left-hand side is missing is not indicative of a publication bias; it's because it is very difficult to develop a physically plausible argument as to why climate sensitivity should be this low. That the lower published estimates tend to be more precise is largely irrelevant. This is not simply a sampling issue.
So, quite a remarkable idea. Analyse the published results to show that there is some kind of bias in the published estimates, and then use this to present what is meant to be some kind of unbiased estimate. Now, of course, I haven't gone through their Social Cost of Carbon paper, but if the Anthropogenic Global Warming one is anything to go by, I won't be taking it too seriously. I really don't think the scientific method includes a section that says "use completely non-existent publications as part of your estimate". I would argue that in any sensible scenario we should base our understanding of these topics on what is actually published, not on what is neither published nor – as far as we're aware – actually in existence.
### 120 Responses to Publication bias?
1. I do think it's a legitimate way of approaching and correcting the (real) problem of publication bias in many disciplines. The problem here is that the funnel-plot argument only works if the uncertainty is symmetric. But climate sensitivity estimates have a long tail that makes that assumption invalid.
There’s also the problem that many estimates use different assumptions and models so it’s not really a straightforward comparison.
2. Elio,
Yes, I agree. It's not without its uses. As you say, though, this isn't really appropriate for climate sensitivity estimates. In a sense, you'd need actual evidence that these other estimates exist and have simply not been published.
3. You forgot to mention the authors are all economists. Or am I just showing my bias? 🙂
4. Yes. In the Cochrane Handbook there’s some discussion about why a funnel plot might be asymmetrical: http://handbook.cochrane.org/chapter_10/10_4_2_different_reasons_for_funnel_plot_asymmetry.htm
Publication bias is just one of the reasons.
The relevant point here, I think, is the idea of small-study effects. Climate sensitivity estimates with high and low variance are not arrived at via the same methods. Simple models with high uncertainty may very well be biased high because they lack representation of key processes. That alone may be sufficient to explain such extreme outliers.
Also, Cochrane reviews try to get every frikking piece of data, even if it is unpublished. So, to your point, they should actually try to show that those unpublished estimates exist, not just pull them out of their hats.
5. dana1981 says:
The same ‘long tail’ (asymmetric distribution) problem applies to social cost of carbon estimates too. There are some really high SCC estimates, especially among the few papers that include climate impacts on economic growth. And there’s no long tail in the opposite direction – we know the impacts won’t be significantly beneficial (as Tol’s gremlins showed).
It’s a somewhat related problem to the other paper you discuss – particularly high climate sensitivity and/or particularly bad climate change impacts could lead to very expensive consequences, and hence a high SCC.
It irks me a bit that the last author on this paper is a Berkeley guy, as a Cal grad myself.
6. talies says:
Surely estimates which give high climate sensitivity include all sorts of feedbacks which are difficult to measure.
7. Ethan Allen says:
There's a publication bias story here. The draft paper itself, the one you linked to above ("Publication bias in Measuring Anthropogenic Global Warming"), mentions an "appendix" six times; that appendix is located here:
The abstract of that appendix states:
"This documents contains details of computation and additional results for "Publication Bias in Measuring Anthropogenic Climate Change," which is to be published in Energy & Environment."
If that’s the E&E of denier fame, oh boy.
In the appendix they list 16 studies, two are Scafetta (both from 2013, one of those is an E&E paper) and one is Lindzen & Choi (2011) and no other paper postdates circa 2011.
They state that:
“Notes: The search for primary studies was terminated on March 3, 2014”
I would have thought that in the runup to AR5 WG1 there would have been many more estimates than these "authors" were able to find; in fact, the only reference to AR5 is:
Stocker, D. Q. (2013): “Climate change 2013: The physical science basis.” Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Summary for Policymakers, IPCC.
The “Summary for Policymakers” I mean WTF?
The distribution is asymmetric because we know the lower limit much better than the upper limit, which would lead to low estimates having smaller confidence intervals.
There may be a sociological effect as well: the low estimates partially come from the mitigation sceptics, who are typically overconfident, and it seems normal to expect them to report confidence intervals that are too narrow.
And there could be a methodological effect: do the low climate sensitivity estimates with small confidence intervals come from people using extremely simplified climate models tuned to the global temperatures? (What mitigation sceptics like to call "observational" estimates.) That would be comparing apples to oranges, and one would need to take the method used into account in the statistical analysis.
9. matt says:
It would be interesting to see a list of caveats included in these studies (of the SCC). I looked into this many years ago and noticed most dealing with more than 2°C of change stated something along the lines of "these bad effects are not included because uncertainty is too large/has not been studied, but we know it's not good". Somewhat like the previous IPCC estimates of SLR (the ice models are bad so we won't include them, but please notice this caveat). Also ignoring effects on impacts to EG.
Anyway, here is "Forty Percent Little Fred" claiming the chances of no publication bias are less than 1/"the number of stars in the universe". He is addressing the Oz equivalent of the Heartland Institute.
(9:40-15min approx. sorry not sure about the end time and could not be bothered looking it up. beer+ashes+caffeine = hope u understand. There is a reasonable chance the comedy doesn’t stop there)
10. matt says:
Ethan points out E&E. Michaels paper above is also E&E. Seriously attp, no need to dig here. Nothing of interest to be found.
11. “This one is called Publication bias in Measuring Anthropogenic Global Warming.”
My interpretation: if there is publication bias in measuring anthropogenic global warming, that means you cannot measure it with high accuracy. No more, no less.
12. BBD says:
Publication bias = E&E
13. Ethan Allen says:
matt,
Michaels (2008) is an E&E paper as you mentioned, that paper is also referenced in the aforementioned appendix above.
14. Ethan Allen says:
OK, Michaels (2008) is referenced in the main draft paper, but not in the appendix. Sorry about that one.
15. beer+ashes+caffeine = hope u understand.
Yes, indeed I do. Although, as I may have mentioned before, my main problem with the Ashes is deciding which team I’d most like to see losing.
16. Publication bias = E&E
Is it regarded as a pretty poor journal?
17. Ethan Allen says:
Well the editor is on record as saying “Denier Papers Welcome” or words to that effect, see:
I’ve seen several dozen of the E&E’s papers with respect to climate change, on a scale of one to five, they rate a zero (or less). 🙂
18. Actually, the Social Cost of Carbon paper is published in Energy Economics, not Energy & Environment. Given I’d quite like a relaxing weekend, I’d probably prefer that we didn’t explicitly mention one of the editors of Energy Economics.
19. Ethan Allen says:
“Actually, the Social Cost of Carbon paper is published in Energy Economics, not Energy & Environment. Given I’d quite like a relaxing weekend, I’d probably prefer that we didn’t explicitly mention one of the editors of Energy Economics.”
I guessed right, took like one second.
The E&E draft paper you linked to above titled ” Publication bias in Measuring Anthropogenic Global Warming” is the paper I am referencing above, not the Energy Economics paper.
20. Ethan,
How do you know it’s in E&E? I haven’t managed to confirm which journal it is being published in.
21. dana1981 says:
Reading a new Citi report on the costs of climate action vs. inaction, I just saw a statement that gets to the point I was making above.
As just one example, modelling by Ceronsky et al with FUND, a fairly standard IAM, suggests that if the thermohaline circulation (THC) were to shut down, the corresponding social cost of carbon (SCC) could increase to as much as $1,000/t CO2.
22. Ethan Allen says:
There is a website that includes the appendix that I found and noted above: http://meta-analysis.cz/climate/
There you will see:
"Reference: Dominika Reckova and Zuzana Irsova (2015), "Publication Bias in Measuring Anthropogenic Climate Change." Energy and Environment, forthcoming."
Let me repeat what the appendix states:
"This documents contains details of computation and additional results for "Publication Bias in Measuring Anthropogenic Climate Change," which is to be published in Energy & Environment."
There could always be more than one "Energy & Environment" journal, I wouldn't know for sure. But if it is the E&E I'm thinking it is, then, oh boy.
23. dana1981 says:
And, immediately following that statement:
3. Omission bias may lead to misleadingly low estimates … The main source of concern is that, by definition, IAMs only model the effects that they are capable of modelling. The implication is that a wide range of impacts that are uncertain or difficult to quantify are omitted. It is likely that many of these impacts carry negative consequences. Indeed, some of the omitted impacts may involve very significant negative consequences, including ecosystem collapse or extreme events such as the catastrophic risks of irreversible melting of the Greenland ice sheet with the resulting sea level rise. Other consequences – such as cultural and biodiversity loss – are simply very difficult to quantify and are hence just omitted.
24. Ethan,
Gotcha, thanks.
Dana,
Precisely. It is much more likely that SCC estimates are biased low than that there are a whole lot of low estimates that have not been published because of the biases of the researchers or the journal editors.
25. anoilman says:
I'm only familiar with publication bias in medicine. In that case, there are motivating reasons why, say, drug companies suppress poor results and only publish good results for their drugs. I'm not sure I could conclude publication bias was occurring in physics. 'cause it's physics.
26. anoilman says:
For instance… is June 2015 the hottest month ever, or is that publication bias? This all seems like a silly silly argument. (Would those same people be willing to claim 1998 wasn't that warm, thus ending all arguments that temperatures have in any way stalled? I seriously doubt it.)
27. BBD says:
AOM
I'm not sure I could conclude publication bias was occurring in physics. 'cause it's physics.
Yes but conspiracy theories and the groupthink meme 😉
And let's not mention Chris De Freitas.
28. anoilman says:
What would he make of gravity I wonder?
29. What would he make of gravity I wonder?
Presumably it's biased because it only attracts?
30. jsam says:
Unicorns are underreported. Therefore they exist. I knew it.
31. lerpo says:
It may be possible to validate whether this method can be applied to physics by testing it against something that was settled long ago. If it can predict the correct answer before it was settled in the literature then maybe it is worth investigating here as well. Feynman offers a possible topic to study:
"It's interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan's, and the next one's a little bit bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number which is higher.
Why didn't they discover the new number was higher right away? It's a thing that scientists are ashamed of – this history – because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they thought something must be wrong – and they would look for and find a reason why something might be wrong. When they got a number close to Millikan's value they didn't look so hard."
32. E&E: http://www.desmogblog.com/sonja-boehmer-christiansen
Sonja Boehmer-Christiansen
Doctorate: "International Relations researching into environmental issues in international and also national politics." Ph.D. in "marine pollution control in the Law of the Sea negotiations." Master's Degree, physical geography. Master's Degree, social science.
Sonja Boehmer-Christiansen is an emeritus reader in geography at the University of Hull and the editor of Energy and Environment, a journal known for publishing the papers written by climate change skeptics. In a 1995 article written by Paul Thacker, Energy and Environment was described as being a journal skeptics can go to when they are rejected by the mainstream peer-reviewed science publications. She has described herself as "an 'expert' on the science and politics of global warming since the late 1980s." Boehmer-Christiansen explained at the time that "it's only we climate skeptics who have to look for little journals and little publishers like mine to even get published."
According to a search of WorldCat, a database of libraries, the journal is carried in only 25 libraries worldwide. And the journal is not included in Journal Citation Reports, which lists the impact factors for the top 6000 peer-reviewed journals.
After a great deal of controversy involving a research paper published by two well-known climate change "skeptics," Sallie Baliunas and Willie Soon, in the journal Climate Change, Boehmer-Christiansen proceeded to run a more extensive version of the article in Energy and Environment.
"FC: Do you think humans are causing global warming?
SC: To be very honest I'm agnostic on this. I don't have the evidence. I mean I have lots of contradictory evidence but I do think, from my experience on ocean pollution and all the other pollution hypes, that when it goes to the political phase there are huge exaggerations. Once bureaucracies get regulatory and taxation powers, the exaggerations decline, scares may even be forgotten. So I honestly believe that there may be a problem but that this problem also has beneficial sides. We know how positive carbon dioxide is to life. So I do think there's much exaggeration (of the man-made warming threat), of the negative aspects, for political reasons. So that's why I'm here (at this Conference). I do think the skeptical scientists are more honest and more truthful than those funded by governments to support the IPCC." [1]
Key Quotes
"… As editor of a journal which remained open to scientists who challenged the orthodoxy, I became the target of a number of CRU manoeuvres. The hacked emails revealed attempts to manipulate peer review to E&E's disadvantage, and showed that libel threats were considered against its editorial team…" [5]
July, 2010
Boehmer-Christiansen was one of a group of climate change skeptics who claimed that Phil Jones had manipulated climate data in the IPCC Fourth Assessment Report. Five separate inquiries were conducted to investigate these claims but the conclusion reached was that there had been no scientific dishonesty or misconduct by the IPCC scientists.
The last review, done by the Independent Climate Change Email Review (ICCER), responded to Boehmer-Christiansen's allegations by concluding that she had provided no evidence in support of her claims.
May 16–18th, 2010: 4th International Climate Change Conference hosted by the Heartland Institute. [1]
The Scientific Alliance — Advising member. [7]
Oil, Gas, Energy Law Intelligence (OGEL) — Contributing Author. [8]
Energy and Environment — Editor.
etc.
33. Eli Rabett says:
A logical explanation is they sent it to Environmental Economics, which declined, and then to E&E.
As to Millikan and his oil drop, the real issue was that Millikan used the wrong value for the viscosity of air. This was finally figured out by a young assistant professor at Hopkins, J. A. Bearden, ~1935. As life would have it, Eli took Sr. Physics Lab many years ago from Bearden, and believe Eli, Bearden was anything but shy about it. He was able to get a more accurate value of e using X-ray spectroscopy, and this set off a "conversation" between him and Millikan, which eventually came down to Bearden figuring out that Millikan's student had measured the viscosity of air incorrectly. This is basic to the oil drop experiment because what is measured is the movement of the drops through air. According to Bearden, what the student had done was to take the average of previous measures, and when his value exactly matched the average he wrote it up and graduated. Unfortunately for Millikan, there was a subtle experimental bias in the apparatus that was used (everyone agreed that the method was a large improvement on other instruments for measuring the viscosity). Anyone doing the oil drop experiment and using the Millikan value for viscosity would get the Millikan value for the charge on the electron.
34. anoilman says:
jsam: Actually you're wrong. Unicorns do exist, it's a well known fact;
http://www.nbcnews.com/id/25097986/ns/technology_and_science-science/t/unicorn-deer-found-italian-preserve/
However, since we've only seen one, that must mean there are a lot more. Maybe there are between 1.4 and 2.4? Did anyone look for its parents? I think not!
35. John Mashey says:
1) It is worth revisiting Morgan and Keith (1995), "Subjective Judgments by Climate Experts." They estimated climate sensitivity. Expert 5 has a very low estimate, and is also very sure about it.
2) Ignoring the physics, and without looking at this in any great detail, the paper says: "As we cannot be sure about the true distribution of the CS estimates, we assume the standard normal distribution to be the best approximation."
That might be plausible if the differences in estimates are caused by differences in small additive assumptions … but it is not obvious why that should be so, or any more likely than differences caused by multiplicative factors, in which case one would try a lognormal instead and see if that's a better fit. After all, their figure 1 implies a non-zero probability of negative sensitivity 🙂
Of course, nothing guarantees any particular distribution, but just assuming a normal seems dubious without clear reasoning for such.
Again, all this is ignoring the physics.
36. Marco says:
Looking at the papers used for the climate sensitivity paper, I wonder how their "publication bias" would look if they had also included papers using paleoclimatological data to estimate ECS. To me it looks like they only used data from papers that used the instrumental record (maybe with the exception of 1-2 papers).
Add a few papers from known contrarians (3 out of the 16), and you bias it even more. Perhaps one can speak of a selection bias as a potential reason for the supposed publication bias.
The final potential issue is the comparison of older papers and newer papers when using that instrumental record, since the supposed 'slowdown' since about 1998 has significant effects on the ECS calculations when you use more data since 2000, if I understand it correctly. So, a paper from 2003 using data up to 2002 is likely to give a higher ECS than a paper from 2011 using data up to 2010. But new data giving a different (and lower) result is not any evidence of publication bias, since it is based on new data not available for the older estimates.
Anyone see any obvious flaws in my assessment? I'd really be happy to hear them.
37. Marco,
Yes, I agree that their sample could well have been biased and they didn't seem to include many (if any) paleo papers. I'm trying to remember if the surface warming slowdown does affect ECS. I've seen arguments suggesting that it doesn't, but I can't quite remember what they were.
38. Marco says:
Thanks, ATTP.
39. Just remembered that I think the point about ECS not being influenced by the slowdown is that the energy balance approach is essentially
$ECS = \dfrac{\Delta F_{2x} \Delta T}{\Delta F - \Delta Q},$
where $\Delta Q$ is the system heat uptake rate. If $\Delta T$ goes down then – in the absence of variability – $\Delta Q$ goes up, and the ECS is unaffected. I think this is correct on average, but not necessarily at all instants (Palmer & McNeal – I think – show this). So, the slowdown could have influenced ECS.
I also noticed that the Social Cost of Carbon paper has a dig at Cook et al., saying
Given how important climate change research is for current policy making, we believe more work is needed on selective reporting in the field. For example, in the light of our results the 97% consensus on human-made climate change reported by Cook et al. (2013) should be understood as the upper boundary of the underlying consensus percentage, because Cook et al. (2013) do not account for potential selective reporting.
40. matt says:
attp, Cmon, just put on ur green and gold. OZ lost the ashes so you can cheer for that, now cheer for the other enemy to lose the final match. Seems like a safe bet too. Weird series. A reluctant congrats to the barmy.
The targeting of Cook seems odd. It seems that those who criticise the consensus pick out Cook and pretend it hasn't been shown multiple times before (Oreskes 2004?, Anderegg et al, Doran & Zimmermann, ….).
41. Steven Mosher says:
$ECS = \dfrac{\Delta F_{2x} \Delta T}{\Delta F - \Delta Q}$ – aren't confidence intervals on ratios inherently nasty?
42. Steven Mosher says:
shit moderator help [Mod: sorted.]
43. Rob Nicholls says:
This is really interesting – I've seen funnel plots like the one shown at the top of the post in medical stats textbooks, but hadn't encountered this sort of thing in the climate wars before. This line in the new paper's abstract really made me chuckle: "Our estimates of the mean reported SCC corrected for the selective reporting bias range between USD 0 and 134 per ton of carbon at 2010 prices for emission year 2015." So the social cost of carbon dioxide emissions may be as low as zero. Cancel the World Climate Summit in Paris!
I really should get around to learning more about how estimates of the social costs of greenhouse gas emissions are calculated.
I'm strongly in favour of putting taxes on GHG emissions (with any necessary tweaks to ensure such taxes are progressive), although the price will have to be high enough, and there are huge vested interests which I fear will do everything they can to make sure that the price is never high enough. I think estimates of the real costs of GHG emissions might be useful as long as it is acknowledged that they can never hope to capture the true costs, because a lot of things cannot be quantified in monetary terms. Surely estimates of social costs of carbon must be subjective and dependent on the value systems employed; e.g. if I believe that the cost arising from the quite possible extinction in the wild of the orange-spotted filefish (a fish heavily dependent on corals) due to climate change would be infinite (and that therefore the true cost of GHG emissions is arguably infinite), I would struggle to see how anyone can refute that objectively, as this would be a matter of subjective value judgments. I'm not aware of any method of estimation that can overcome this problem of dependency on value judgments and I don't believe that it's possible that such a method can exist. I'm happy to be corrected on this.
44. John Mashey says:
Most of this is about the "earlier" paper, by Dominika Reckova and Zuzana Irsova, both at Charles U.
The Appendix to the *earlier* paper says: "With asymmetric distributions this assumption does not necessarily hold, but there is no reason why climate sensitivity estimates should not be distributed symmetrically."
Well, economists said so.
It gives 16 primary studies, of which 3 are Lindzen and Choi (2011), Scafetta (2013a), and Scafetta (2013b), i.e.
"Lindzen, R. S. & Y.-S. Choi (2011): "On the observational determination of climate sensitivity and its implications." Asia-Pacific Journal of Atmospheric Sciences 47(4): pp. 377–390.
Scafetta, N. (2013a): "Discussion on climate oscillations: Cmip5 general circulation models versus a semi-empirical harmonic model based on astronomical cycles." Earth-Science Reviews 126: pp. 321–357.
Scafetta, N. (2013b): "Solar and planetary oscillation control on climate change: hind-cast, forecast and a comparison with the cmip5 gcms." Energy & Environment 24(3): pp. 455–496."
See the arXiv version, just flip through the pages, then skim the references. See how many "interesting" names you can find.
In the paper itself, we find (as somebody mentioned) "Michaels, P. J. (2008): "Evidence for 'publication bias' concerning global warming in Science and Nature." Energy & Environment 19(2): pp. 287–301."
We also find that of ~44 references, 14 are first-authored by Havranek and 6 coauthored by Havranek, i.e. 20, almost half of the references. That may be OK, or it may not be. It would have been nice had Dominika Reckova and Zuzana Irsova shown more familiarity with the climate science literature.
========
Finally, although I'd guess this is just accident (the Czech Republic is not huge), the affiliations for the *new* paper are:
"Tomas Havranek a, b, Zuzana Irsova b, Karel Janda b, c, David Zilberman d
a Czech National Bank
b Charles University, Prague
c University of Economics, Prague
d University of California, Berkeley"
Vaclav Klaus graduated from U of Economics, and was at the Czech National Bank (under its previous name).
45. izen says:
@-Rob Nicholls
"I'm not aware of any method of estimation that can overcome this problem of dependency on value judgments and I don't believe that it's possible that such a method can exist.
I'm happy to be corrected on this."
Some months ago on this blog we were all lucky enough to be informed about this very issue by an economist whose name must not be spoken (say it three times and bad things happen…). The mistake you are making is to think that because value judgements are subjective they have any value. Economists know that value is unmeasurable and indefinable, but you can always put a number on the price.
On the issue of coral reefs, specifically the risk of ocean acidification on the Australian Great Barrier Reef, he had this to say:
"Valuing natural resources is something that many environmental economists do for a living. A common finding is that the vast majority of people cares a little about these matters, and a small minority cares a lot."
Continuing…
"With 2 million visitors a year, the Great Barrier Reef isn't even Australia's top attraction; the Sydney Opera House has 8 million. …Even if ocean acidification would completely destroy the Great Barrier Reef, which it will not, then the impact on the global tourism industry is small. Even the Australian tourism industry is unlikely to take a big hit, as capital and labour in tourism are rather mobile. The more likely scenario, however, is that local tourist operators will preserve that bit of the Great Barrier Reef that attracts tourists. After all, that's what they do with Venice, ski slopes, and sandy beaches."
So now you know: subjective values do not count; it's the numbers of how many people are prepared to pay what price that is the only objective measure in these matters.
46. @Rob N
"I believe that the cost arising from the quite possible extinction in the wild of the orange-spotted filefish (a fish heavily dependent on corals) due to climate change would be infinite"
People's values are people's values. The job of an economist is to measure these values, rather than to pass judgement.
That said, your statement is peculiar. If you think that the value of the filefish is arbitrarily large, then you should be willing to give up anything that has a finite value if that would ever so slightly increase the chance of the filefish surviving. Giving up anything would include your use of the internet, and the carbon dioxide thus emitted.
47. izen,
At least you tried.
48. Rob Nicholls says:
Izen, thanks v much for your response. "So now you know: subjective values do not count; it's the numbers of how many people are prepared to pay what price that is the only objective measure in these matters."
OK, but the price that people are prepared to pay does not in my opinion give the full value of something. Although we might be able to ask people how much they value the continued existence of a certain species of fish, we're not able to ask future generations of humans or members of the species of fish itself. Maybe this is a bad example; perhaps a better one would be to ask what's the monetary cost of the death of a human being due to flooding or crop failure caused by climate change. I think the answer to that is subjective and political, not objective. At least internalising some of the cost of GHG emissions is better than not internalising any of it, but I think it would be wrong to think that the cost could be properly calculated, as then some people might be tempted to think that after 1) calculating an objective cost and 2) applying that as a price to GHG emissions, we've internalised the cost and solved the problem.
49. Rob Nicholls says:
Thanks Richard Tol. "People's values are people's values.
The job of an economist is to measure these values, rather than to pass judgement." I'm okay with that, and it wouldn't be fair to expect more from economists than to do this as well as they can (and I'd hope that economists would do their best to account for differences in purchasing power when doing this); however, I would hope that people realise that the value of some things cannot be expressed in monetary terms.
50. Eli Rabett says:
Now some, not Eli to be sure, might ask whether Richard Tol is worth a single filefish.
51. Willard says:
> People's values are people's values. The job of an economist is to measure these values, rather than to pass judgement.
This presumes two dubious ideas: that economics is value neutral and that economists model people's values. The two dubious ideas might be interconnected:
Consider an example. The concept of Pareto efficiency is defined in value-neutral terms: a distribution is Pareto-efficient if there is no other distribution that improves some individuals without harming at least one individual. The concept of distributive justice is not value-neutral; it invokes the idea that some distributions are better because they are more fair or more just than others. The positive economist holds that the latter set of distinctions are legitimate to make — in some other arena. But within economics, the language of justice and equity has no place. The economist, according to this view, can work out the technical characteristics of various economic arrangements; but it is up to the political process or the policy decision-maker to arrive at a governing set of normative standards. Walsh and Putnam (as well as Amartya Sen) dispute this view on logical grounds; and this leaves the discipline free to have a rational and reasoned discussion of the pros and cons of various principles of distributive justice.
http://economistsview.typepad.com/economistsview/2012/03/value-free-economics.html
52. @Rob
The value of filefish can and has been expressed in monetary terms. You should note that this is the value of filefish to humans, rather than the value of filefish to filefish.
Eli just perfectly illustrated the method: Given the choice, would you rather save a single filefish or me?
53. izen says:
In the field of clinical biology, studies investigating publication bias are motivated by the knowledge that it does exist, for a well known reason. Medical R&D is known to carry out research and obtain at least preliminary results, but that research is never published. The industry has a financial motive to promote 'good' research results, and suppress 'bad' results.
The distribution of results does not have to be purely Gaussian. Clinical research may often have results that are inherently skewed, with fat tails and fixed upper or lower bounds. But the PDF is usually at least implicit in the method, and often explicitly discussed in the context of what the research can measure and what the results imply. A common methodology that 'should' detect a particular distribution of benefit and harm from a clinical treatment, but which in the published literature only shows the 'good' side of the outcomes, is likely to raise suspicions.
One response to this is the adoption of research methods that confine the range of outcomes they are capable of testing for. This is often justified as the adoption of more accurate methodologies because they are less prone to variance or uncertainty.
If you only look for the positive results you want, then the absence of negative results is not exactly publication bias…
To apply the methods of detecting publication bias to the value of ECS would seem to make the unstated assumption that the methodologies used to determine this value are capable of generating a wider range of results, but that there is a significant number of unpublished results that have been suppressed for some unstated reason. In the case of clinical research no conspiracy theory is required; there is ample evidence the practice exists, and for a known reason. But making this assumption about the research into climate sensitivity does seem to be straying into a conspiracy theory in which, around the world, scientists are filing in the bottom drawer any results on ECS that should occupy the other part of the 'funnel plot' they make. To most rational people such a conspiracy, or even a group-think effect, seems unlikely.
But ironically, a possible explanation for their result is a preponderance of research that is following the Pharma playbook, using methodologies that give 'good' results with smaller error bars, although others in the field suspect the methodology may be inherently incapable of capturing the true probabilities in the upper range.
54. whimcycle says:
Richard's is but another variation of the classic contrarian response: If you guys were TRULY concerned about emissions, you would have killed yourselves already.
55. Richard,
The value of filefish can and has been expressed in monetary terms.
Are you really able to reliably include the ecological complexity?
would you rather save a single filefish or me?
I would regard this as a question that isn't worth answering.
56. Joshua says:
==> "The job of an economist is to measure these values, rather than to pass judgement."
Interesting. What is your methodology for measuring values, controlling for accountability bias, the paradox of choice, hyperbolic discounting, confirmation bias, variances in how people maximize utility, etc.?
==> "If you think that the value of the filefish is arbitrarily large, then you should be willing to give up anything that has a finite value if that would ever so slightly increase the chance of the filefish surviving."
Interesting. I read someone somewhere write that the job of an economist is not to pass judgement. I just can't quite rememb…… Oh. Wait.
57. Joshua says:
==> "The job of an economist is to measure these values, rather than to pass judgement."
This is really quite interesting. Richard, I'd say that virtually all your comments in these threads express your judgements of people's values.
58. @joshua
"should" here follows from the conditional clause starting with "if".
Rob should do what Rob thinks best. He's a grown man. It's a free country.
59. If you want a taste of how economists measure value, why not try this: http://www.surveygizmo.co.uk/s3/2156353/Dolton-Tol-beta
60. Richard,
The last time you linked to a survey on my site, I ended up deleting the links because it appeared that you did not have suitable ethics approval. Can you confirm that you do for this one?
61. @wotts
Yes, we have ethics approval for this survey.
62. Okay, thank you.
63. Joshua says:
Richard, let's try again. Why don't you address this point: "I'd say that virtually all your comments in these threads express your judgements of people's values."
Often, you express judgements of the values of particular groups, and sometimes you express judgements of the values of individuals.
How do you reconcile your behavior with your description of what economists do?
64. Joshua says:
And Richard – The other question I asked was about how you control for how people respond to questions in order to assess their "values." I referenced a number of complicating factors that speak to the complexity of transforming expressed opinions into an assessment of values. Perhaps you could speak to how you do that – which was the point of my comment. Just linking to one of your polls doesn't actually address my point. I wasn't asking how you sample opinions.
Are you trying to actually address my questions? Because if so, it seems to me that you're doing a very poor job so far.
65. Joshua says:
I mean seriously, look at this:
Here's my question (with bold added to make the problem more apparent): "Interesting. What is your methodology for measuring values, controlling for accountability bias, the paradox of choice, hyperbolic discounting, confirmation bias, variances in how people maximize utility, etc.?"
And in response you linked a poll for how you measure opinions… with the following description from the survey: "By answering these questions, you will help researchers at the University of Sussex to understand what people know and think about public policy and its various domains."
If you are using the poll to measure values, why are you telling them that you are measuring what they know and think about public policy? Don't you think that there's an ethical problem where you tell people that you're doing something other than what you say elsewhere that you're doing? How did you get ethics approval for that? What did you indicate in your submissions for ethics approval – did you say that the survey was intended to measure values?
66. Joshua says:
And Anders… do other people really end up in moderation as much as me? Is this yet more evidence that you're trying to censor me because my arguments are so devastating to your world view? 🙂
67. Actually, quite a few do and I don't always know why. It's not just you, as much as you might like that to be the case 🙂
68. Joshua says:
Finally, Richard –
==> "@joshua "should" here follows from the conditional clause starting with "if""
This, also, seems hard to reconcile with your previous comment: "That said, your statement is peculiar. If you think that the value of the filefish is arbitrarily large, then you should be willing to give up anything that has a finite value if that would ever so slightly increase the chance of the filefish surviving."
Are you actually contending that isn't a judgement of values? Your argument there looks to me like a judgement of values. You are asserting (behind a veil of plausible deniability) an inconsistent or hypocritical approach to values – which is, in itself, a judgement of values.
69. Joshua says:
Yeah. Sure. That's what they all say. 🙂
70. Joshua says:
It's always a challenge to get Richard to actually address points that I've made. I fear, however, that my efforts will be in vain. Dude's a non-sequitur machine.
71. Kevin O'Neill says:
Rob Nicholls writes: "I'm not aware of any method of estimation that can overcome this problem of dependency on value judgments and I don't believe that it's possible that such a method can exist."
Rob, one would hope that not only is the present value of 'filefish' included in these economic models but that it's also included in the discount rate. We could ask, how many and how much would people pay to see a live Dodo?
The Passenger Pigeon has been extinct now for a hundred years. Does anyone lament them? How could they – no one alive remembers them. Those that watched them disappear certainly did. Here are the words of Simon Pokagon, a Potawatomi tribal leader, recounting in 1895 an event he witnessed nearly a half century earlier:
"While I gazed in wonder and astonishment, I beheld moving toward me in an unbroken front millions of pigeons, the first I had seen that season … I have stood by the grandest waterfall of America," he wrote, "yet never have my astonishment, wonder, and admiration been so stirred as when I have witnessed these birds drop from their course like meteors from heaven."
There are also the unknown possible benefits lost. Many species are lost before they're even discovered or studied. What could they have taught us about new chemical or biological 'tricks' that Mother Nature evolved to fill a specific niche?
We could ask an econometrician what price the Passenger Pigeon carries in their models – I suspect it's zero. I also suspect that if Simon Pokagon were alive today he'd give a slightly different answer.
72. izen says:
@-Kevin O'Neill
"We could ask an econometrician what price the Passenger Pigeon carries in their models – I suspect it's zero. I also suspect that if Simon Pokagon were alive today he'd give a slightly different answer."
The potential financial gains from the revival of extinct reptiles have been explored in film. The passenger pigeon is rather less likely to be a big audience draw, although the feasibility is higher. Wooly mammoths are probably somewhere near the feasibility/profitability cusp. But you need an expert on such things to determine that. However, from observation, such experts seem to be chosen for their ability to provide an economic justification to satisfy those that pay them. Economeretricians!
73. @joshua
I made only a few interventions on this thread, and in only one did I express a judgement. I judged the credibility of Rob's claim that filefish are infinitely valuable, as it is inconsistent with his observed behaviour.
As to the other points you raise: All true, all subject to active research.
74. BBD says:
@ Joshua
And Anders… do other people really end up in moderation as much as me?
It happens to me regularly too.
75. Willard says:
> In only one did I express a judgement.
This makes at least two. Speaking of which, what about two filefishes?
76. izen says:
@-"I judged the credibility of Rob's claim that filefish are infinitely valuable, as it is inconsistent with his observed behaviour."
One of the major flaws in the 'rational economic agent' assumption is that economic behavior is directly correlated with values. Sometimes the link is even supposed to be linear!
There are significant external costs in car ownership and gun ownership in those nations where they are widespread. They are major causes of mortality and morbidity. However, the value we place on the ability to own and use those machines prevents any imposition of a tax to represent those external costs. Certainly there is no expectation of imposing costs to eliminate the source of the harm, just enough to maximise tax income for the state. It is unlikely a carbon tax would play a different role.
77. Rob Nicholls says:
Thanks Richard Tol for your responses, and the survey link, and thanks to everyone else for their comments. ATTP, sorry if I sent this thread off topic. I think I've said what I wanted to say already, but I'll think about this more and will think about what people have said.
78. Joshua says:
izen –
==> "One of the major flaws in the 'rational economic agent' assumption is that economic behavior is directly correlated with values."
Yes, that's very much what I was getting at. Unfortunately, it seems that at least for the purposes of the discussion here, Richard is quite content to make broad assumptions in exactly that regard – and isn't interested in discussing the foundation of his assumptions. I would imagine that within his more professional framework, he's careful about not taking those assumptions for granted – which then leaves the question of why he'd display such different reasoning here than what he engages in professionally.
79. bill shockley says:
I know this topic is not about climate sensitivity and the social cost of carbon, but I also think I correctly presume that many here have a genuine interest in those topics. So, with the hope that ATTP welcomes suggestions for future topics, I note that James Hansen has written an article for the Huffington Post elucidating his long and difficult recent sea level rise paper which, it turns out, was 8 years in the making (clip):
2°C is not only a wrong target, temperature is a flawed metric due to the meltwater effect on temperature. Sea level, a critical metric for humanity, is at least on the same plane.
80. Paul S says:
As Steven Mosher was possibly alluding, the ECS energy balance formula structurally will return larger uncertainty for larger ECS values for a given level of uncertainty in the inputs. E.g.
For DeltaT = 1K +/- 0.2, DeltaF = 1W/m2 +/- 0.2, ECS = 2.5-5.6K (3.1K spread)
For DeltaT = 1K +/- 0.2, DeltaF = 1.5W/m2 +/- 0.2, ECS = 1.7-3.4K (1.7K spread)
This situation is compounded by negative aerosol forcing being the major, or in some cases only, source of uncertainty used in net forcing. Generally, larger (more negative) aerosol forcing estimates have larger uncertainties, with the result that studies with higher net forcing (a less negative aerosol estimate) will likely have lower net forcing uncertainty.
In summary, the plot of precision vs. CS estimate is what I would expect from an unbiased sample in this context.
81. Paul,
I think I see what you're getting at. It's the standard error propagation formula. If $R$ is given by,
$R = \dfrac{X Y}{Z},$
then $\delta R$ is given by
$\delta R = \left| R \right| \sqrt{ \left( \dfrac{\delta X}{X} \right)^2 + \left( \dfrac{\delta Y}{Y} \right)^2 + \left( \dfrac{\delta Z}{Z} \right)^2 }.$
So, the larger the value of $R$, the larger the uncertainty might be. I wasn't sure I followed this, though:
Generally, larger (more negative) aerosol forcing estimates have larger uncertainties with the result that studies with higher net forcing (less negative aerosol estimate) will likely have lower net forcing uncertainty.
82. Paul S says:
To use the previous example, let's say the 1W/m2 was made up of +2W/m2 from all other sources and -1W/m2 due to aerosols. The aerosol forcing uncertainty is +/- 0.4W/m2, which is used to determine the full net forcing uncertainty range of 1W/m2 +/- 0.4. Then the ECS range, using the same DeltaT, is 2.1-7.4K.
If the central aerosol estimate is -0.5W/m2 with a smaller uncertainty of +/- 0.2, then the ECS range will be 1.7-3.4K.
Because net forcing uncertainty was determined by aerosol forcing uncertainty, and aerosol forcing uncertainty was larger with the more negative central value, and a more negative aerosol forcing value means higher sensitivity, the spread became even larger for higher sensitivity values.
83. Paul,
Okay, yes, I'm with you now.
The more negative the aerosol forcing, the larger the mean ECS estimate, and the larger the uncertainty range. So, there are plausible arguments as to why the increase in uncertainty with increasing mean ECS is not indicative of a bias, but is simply a consequence of basic error propagation.
84. Rob Nicholls says:
I just re-read Izen's August 23, 2015 at 2:01 pm… I think it may have been ever so slightly tongue in cheek and I did not realise it earlier. I think Poe's law will get me every time.
85. Eli Rabett says:
Of course, the real issue is the future value of filefish.
86. Mal Adapted says:
I'd pay $1000 to see a living Ectopistes migratorius. I'd pay $10,000 to see a brood of fledglings from a mated pair. I'd pay $100,000 to see a flock of them roosting in a grove of living, mature Castanea dentata. I'd require genetic verification of all individuals before paying, of course.
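To see the error-propagation point from comments 80–83 numerically, here is a minimal sketch (my own illustration, not from the comments; it assumes a standard forcing of roughly 3.7 W/m² per CO2 doubling, which reproduces Paul S's endpoint ranges):

```python
F_2X = 3.7  # W/m2 per CO2 doubling (assumed standard value, not stated in the comments)

def ecs_range(dT, dT_err, dF, dF_err):
    """Endpoint range for ECS = F_2X * dT / dF, as in the worked examples above."""
    lo = F_2X * (dT - dT_err) / (dF + dF_err)
    hi = F_2X * (dT + dT_err) / (dF - dF_err)
    return lo, hi

print(ecs_range(1.0, 0.2, 1.0, 0.2))  # approx (2.5, 5.6): the ~3.1 K spread quoted above
print(ecs_range(1.0, 0.2, 1.5, 0.2))  # approx (1.7, 3.4): the ~1.7 K spread
print(ecs_range(1.0, 0.2, 1.0, 0.4))  # approx (2.1, 7.4): bigger aerosol uncertainty, wider still
```

The same inputs with a smaller net forcing give both a larger central ECS and a larger spread, which is the sense in which the funnel asymmetry can arise without any publication bias.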
87. From what I can tell, the Havranek study appears to suffer from a fundamental methodological flaw.
Surely an increasing SCC spread is simply the mechanical result of a damage function that is convex in temperature (i.e. an assumption built into all IAMs)? Far from pointing to publication bias, the widening confidence intervals are exactly what I would expect given the standard IAM setup.
A toy example:
– You want to estimate the SCC using an IAM in which climate damages are simply the square of temperature, i.e. D = T^2.
– Let’s say your model is used to evaluate the costs associated with a global temperature increase that will be somewhere between 0 and 2 degrees with uniform probability. The implied spread on damages is then (2^2 – 0^2 =) 4.
– Now imagine that your model is used to evaluate the costs of a slightly higher temperature range, between 1 and 3 degrees (again assuming uniform probability). Well, your damages spread increases to (3^2 – 1^2 =) 8!
Clearly the increase in this little example has nothing to do with publication bias. (How can it? We’re using exactly the same “model”.) Instead, the increasing spread has everything to do with the mechanics of the model setup.
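A quick check of the arithmetic in this toy example (a sketch; the quadratic damage function D = T^2 is the toy assumption above, not any particular IAM's damage function):

```python
# Toy example: damages D = T^2, temperature uniform over a range of fixed width.
# Shifting the range upward doubles the damage spread -- same "model", no
# publication bias involved.

def damage_spread(t_low, t_high):
    damage = lambda t: t ** 2  # convex toy damage function
    return damage(t_high) - damage(t_low)

print(damage_spread(0, 2))  # 4  (2^2 - 0^2)
print(damage_spread(1, 3))  # 8  (3^2 - 1^2)
```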
Now, I first asked this question over a month ago (which is when I first heard about the Havranek paper), but I still haven’t received a convincing answer… Can anyone convince me that the very same thing isn’t happening with the published SCC results?
88. PS – I should add that I haven’t had time to read the full paper yet… but if the authors haven’t explicitly controlled for this issue, then I simply can’t see how their method is valid.
89. Willard says:
For what it’s worth, I left this comment at Judy’s:
Actually, Cap’n, the E&E authors might even be able to argue for publication bias as soon as scientific results evolve non-randomly.
Econometrics’ the new sophistry.
http://judithcurry.com/2015/08/22/week-in-review-energy-and-policy-edition-9/#comment-726600
90. John Mashey says:
PART 1
Good, we’re back on the post’s main topic, the Paper and its Appendix. PDF page #s (not necessarily the same as those printed) are used below for simplicity.
SUMMARY … surveying opinions about the Moon shows that some think it is made of cheese, such as noted researchers Wallace and Gromit.
1) Reckova and Irsova used 48 data points from 16 papers, most of which are relatively old, only 1 of which was used by IPCC AR5 WG I.
2) There is little evidence they read and understood the relevant AR5 Chapter 10, which has a substantial discussion of sensitivity, and 19 papers, most newer than those they used. From some comments, they seemed unaware of the literature, saying
” Estimates of climate change and climate sensitivity occur only rarely in the scientific literature”
Paper: 44 total references, of which 8 were climate *science*, including 2 for AR4 and 1 for AR5, none with page numbers.
It is not a plus for credibility to reference 1000-page volumes without giving pages or sections.
15 first-authored by Havranek, 6 coauthored by Havranek = 21.
It might have been better to have spent more effort getting familiar with the literature.
3) Of their 48 data points, 11 came from Lindzen+Choi(2011), who computed different numbers for others’ studies and provided another of their own, which was the smallest CS and the one with the smallest uncertainty. This work and its predecessor had issues.
4) The conclusions of bias in the paper rest strongly on Figure 3, especially the 4 papers in upper left corner, plus one not shown.
Neither the Paper nor the Appendix gave any mapping from charts to the studies they were based on, so I had to search for the references and look at papers until I found at least those in that upper left corner, on which so much of the argument rests.
19 Lindzen+Choi(2011) was omitted as it was off the chart, although it showed up in Fig 4.
12 Scafetta(2013a), “cycles” published in Energy and Environment
20 Scafetta(2014), “planets”
17 Hargreaves and Annan (2009), except the numbers given were *not* theirs, but part of their refutation of Chylek+Lohmann(2008), hence this is in some sense a false attribution
7 Andronova and Schlesinger (2001) gave 4 sets of numbers for different model parameters, and this low sensitivity number was not recommended. See below
5) They wanted to assume that estimates should be normally distributed, despite the fact that this implies 1/20 CS estimates would be negative, and 1/8 below 1, against which there is overwhelming evidence. Every one of the 5 low-sensitivity-high-precision points has problems, of the sort that happen when people who are not domain experts read detailed technical papers without knowing the credibility of various authors or realizing that some numbers were only mentioned to be refuted.
They also didn’t seem to understand the studies that explore parameters, giving the same weight to deprecated combinations as to those thought more relevant.
6) Paper p.4 favorably cites Michaels(2008) … who possibly might have his own publication bias, given a long history of substantial fossil funding, and that paper is in E & E. This is not a plus for credibility. Likewise, there seems a random scattering of possibly cherry-picked technical papers … but zero evidence of thorough reading of IPCC AR5’s relevant section, i.e., the latest major assessment.
Next part has the details.
91. John Mashey says:
PART 2a – DETAILS
Appendix p.2: They have 48 points, from 16 studies, including Lindzen & Choi(2011) or L+S, Scafetta(2013a) and Scafetta(2013b), in E&E, from arXiv versions.
Scafetta(2013a) (CYCLES): “Power spectra of global surface temperature (GST) records (available since 1850) reveal major periodicities at about 9.1, 10-11, 19-22 and 59-62 years. … This hypothesis implies that about 50% of the ∼ 0.5 °C global surface warming observed from 1970 to 2000 was due to natural oscillations of the climate system, not to anthropogenic forcing as modeled by the CMIP3 and CMIP5 GCMs. Consequently, the climate sensitivity to CO2 doubling should be reduced by half, for example from the 2.0-4.5 °C range (as claimed by the IPCC, 2007) to 1.0-2.3 °C with a likely median of ∼ 1.5 °C instead of ∼ 3.0 °C.”
Scafetta(2013b) (PLANETS): “It is found that: (1) about 50-60% of the warming observed since 1850 and since 1970 was induced by natural oscillations likely resulting from harmonic astronomical forcings that are not yet included in the GCMs; … equilibrium climate sensitivity to CO2 doubling centered in 1.35 °C and varying between 0.9 °C and 2.0 °C.”
Paper p.9, Fig 3: Funnel plot for CS gives the CS estimate, and the precision, which is inverse of Std Error they computed in Appendix Table 11, I think.
They say ” Notes: This figure excludes the single most precise estimate from the data set to zoom in on the relationship.” (Lindzen and Choi)
It uses Se(low), whereas Fig 4 uses Se(high).
Appendix Table 11, p.9 lists the data points, but doesn’t identify which studies they came from, which means one has to go dig them out.
They are numbered from 2 to 20, with 1,14,15, missing, and the number of data points varies.
For instance, study 2 has one point, study 19 (Lindzen and Choi) contributes 11, almost 1/4 of the total.
It looks like the Paper just took L+S’s 0.7 main result, and added 10 of the 11 items from L+S Table 2, p.9, omitting the one called infinity.
Here are the 8 of 48 points with CS less than 2. I added the precisions for Se(low) and Se(up); the *’d ones are the group at top left in Fig 3.
Study CS Low Upper z Se(low) Se(Up) Prec(low) Prec(up) Precision = inverse of Se’s
* 7 1.43 0.94 2.04 1.96 0.298 0.371 3.36 2.70 *Andronova+Schlesinger(2001), p.6, for case T1, NOT their preferred case T3
11 1.54 0.3 7.73 1.96 0.754 3.763 1.33 0.27
*12 1.5 1 2.3 1.96 0.304 0.486 3.29 2.06 * Scafetta(2013a)
*17 1.8 1.3 2.3 1.96 0.304 0.304 3.29 3.29 * Hargreaves and Annan(2009), but that is really from Chylek+Lohmann, see below.
19 0.7 0.6 1 1.96 0.061 0.182 16.39 5.49 X L+S Table 2, p.9, omitted from graph, but (I don’t think) from stats
19 1.7 0.9 8 1.96 0.486 3.83 2.06 0.26 – L+S Table 4, ECHAM5/MPI-OM … recalculated by L+S
19 1.7 1 8.8 1.96 0.426 4.316 2.35 0.23 – L+S Table 4, UKMO-HadGEM1 … recalculated by L+S
*20 1.35 0.9 2 1.96 0.274 0.395 3.65 2.53 * Scafetta(2013b)
PART 2b
Andronova+Schlesinger(2001) p.6 say:
“Because T1 has no ASA forcing, its mean (μ = 1.43°C), median (m = 1.38°C), standard deviation (σ = 0.35°C), and skewness (s = 0.80) are small, and its 90% confidence interval, 0.94°C to 2.04°C, is narrower and shifted toward smaller values than the IPCC range of 1.5°C to 4.5°C. …
If one were to make a “best estimate” of this, one would likely choose T3, which does include ASA forcing and does not include solar forcing.”
The paper’s authors do not seem to understand this sort of experiment, in which an unrealistic assumption (no aerosol forcing) is made to see its effects.
Paper p.4 says:
“Andronova & Schlesinger (2001) disagree with the third IPCC report and argue that climate sensitivity lies with 54% likelihood outside the IPCC range. They find that the 90% confidence interval for CS is 1 to 9.3.”
It is not a plus for credibility to quote a 14-year-old paper arguing about the TAR.
“Masters (2013) notes a robust relationship between the modeled rate of heat uptake in global oceans and the modeled climate sensitivity. This signals that researchers could have ways of influencing their results.”
Climate scientists deal with complex data and have to make assumptions, and their results unsurprisingly differ.
Masters writes:
“The observational estimate for climate sensitivity of 1.98 K [1.19–5.15 K] produced by this method is slightly lower than that of the IPCC AR4” and then goes on to discuss the issues … with no hint of suspicion, that I could see, that people were distorting their results.
Hargreaves and Annan(2009) were *refuting* a paper and its estimates:
” The sensitivity of the climate system to external forcing has long been a subject of much research, the bulk of which has concluded that the climate sensitivity to a doubling of CO2 is likely to lie in the range 2–4.5 C (IPCC 2007: Summary for Policymakers, Solomon et al., 2007; Knutti and Hegerl, 2008). Chylek and Lohmann (2008) (hereafter CL08) claim to have found evidence that the true value is much lower, around 1.8C, and present two main arguments in support of their claim. …
The climate sensitivity to a doubling of CO2 is now estimated to be about 3.5 °C, with a 95% range of 2.6–4.5 °C, compared to CL08’s estimate of 1.3-2.3 °C.”
PART 2c
Paper p.4 makes a curious claim:
“Estimates of climate change and climate sensitivity occur only rarely in the scientific literature. For instance, the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC) predicts only that climate sensitivity probably ranges from 1.5 to 4.5 with high confidence and is extremely unlikely to be lower than 1, again with high confidence (Stocker 2013).”
They did not cite Knutti and Hegerl (2008), “The equilibrium sensitivity of the Earth’s temperature to radiation changes”. This key figure summarized the various lines of evidence and constraints.
Despite mentioning IPCC AR5, they didn’t cite IPCC AR5 (2013) WG I, Fig 10.20 (p.941) and the surrounding discussion, section 10.8. They gave specific page numbers for TAR and AR4, not AR5.
Among other things, the graphs identify the studies (unlike the paper under discussion), so for instance one can see Lindzen & Choi(2011) as a real outlier at left (small brown bar). The overall assessment is in 10.8.2.5; p.940 says:
“Some recent studies suggest a low climate sensitivity (Chylek et al., 2007; Schwartz et al., 2007; Lindzen and Choi, 2009). However, these are based on problematic assumptions, for example, about the climate’s response time, the cause of climate fluctuations, or neglect uncertainty in forcing, observations and internal variability”
AR5 critiqued the same Chylek+Lohmann (2008) paper critiqued by Hargreaves+Annan.
The Paper has 16 studies, dated 2001-2013, but only 4 from 2010-2013.
The AR5 Fig 10.20 uses 19 studies, of which 16 are from 2010-2013, and one each from 2008, 2008, 2009.
The only reference in both is L+S(2011), although AR5 has later papers by Hargreaves and Annan (2012 vs 2009), Murphy et al (2009 vs 2004).
ECS LESS THAN 1, EVEN NEGATIVE
Like I said before:
“The Appendix to the *earlier* paper says:
“With asymmetric distributions this assumption does not necessarily hold, but there is no reason why climate sensitivity estimates should not be distributed symmetrically.”
“After all, their figure 1 implies a non-zero probability of negative sensitivity :-)” (me)
Figure 1’s dashed line is a normal distribution, which they think should be reflected in the papers.
It has a mean of 3.27, with a density of ~0.2, which needs a Standard Deviation of about 2 (they got 1.96, which I used.).
So:
DENS CUM Sensitivity estimate
0.01 0.00 -2
0.02 0.01 -1
0.05 0.05 0 Thus 1/20 estimates ought to be negative.
0.10 0.12 1 1/8 could be less than 1 deg, which IPCC says is extremely unlikely
0.17 0.26 2
0.20 0.50 3.27
0.19 0.65 4
etc
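A minimal check of these numbers (a sketch assuming, as above, a normal distribution with mean 3.27 and standard deviation 1.96 read off the paper's Figure 1):

```python
from math import erf, exp, pi, sqrt

MU, SIGMA = 3.27, 1.96  # read off the paper's Figure 1, per the comment above

def density(x):
    return exp(-0.5 * ((x - MU) / SIGMA) ** 2) / (SIGMA * sqrt(2 * pi))

def cumulative(x):
    return 0.5 * (1 + erf((x - MU) / (SIGMA * sqrt(2))))

print(f"P(CS < 0) = {cumulative(0):.3f}")   # ~0.048, i.e. about 1/20 negative
print(f"P(CS < 1) = {cumulative(1):.3f}")   # ~0.123, i.e. about 1/8 below 1
print(f"peak density = {density(MU):.2f}")  # ~0.20, matching the figure
```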
Paper, p.8 says:
“The left-hand side of the graph is completely missing and the shape of the solid line representing the kernel density of the CS estimates does not correspond to the normal density, shown as the long-dash dot line. All the figures indicate publication selectivity bias”
I think something else is indicated, about the authors’ familiarity and understanding of the literature.
They end:
“A lower estimate of climate sensitivity would imply a lower estimate of the social cost of carbon. This, in turn, would influence the amount spent on reducing carbon dioxide in the atmosphere. This money could be spent on other areas of environmental protection.”
92. Grant,
It sounds like your point is similar to the point that Paul S (and maybe Steven) was making about the other ECS paper. Given the form of the function typically used and the inherent uncertainties in the variables, the uncertainty probably increases with increasing ECS estimate.
93. John,
Very thorough, thanks.
94. @grant
Energy Economics welcomes replication, and there is always the Public Finance Review to fall back on.
95. Richard,
Energy Economics welcomes replication
Except, if Grant is correct, then what Havranek et al have found isn’t some indication of publication bias, but simply a property of this particular type of analysis. Given that the premise of the Havranek paper is quite simple, wouldn’t one expect an editor – especially one who actually works on this topic directly – to have noticed this potential fundamental flaw?
96. @wotts
An online appendix with data and code is available at
http://meta-analysis.cz/sccmeta-analysis.cz/scc
I obviously recused myself. Ugur Soytas was the editor for this paper.
97. I don’t think Grant has a hypothesis. I think he has a claim that there is something that they should have controlled for. Either Grant is right or he’s not, and either they did or they didn’t. I’m insufficiently expert to know whether Grant is right or not and whether they controlled for this or not. Someone who does have the expertise could probably quite easily clarify this.
98. Richard,
Unless I’m missing some subtlety, I think you may have linked to the wrong appendix.
99. @wotts
Indeed. This is the correct link: http://meta-analysis.cz/scc/
100. izen says:
@-Rob Nicholls
“I just re-read Izen’s August 23, 2015 at 2:01 pm…I think it may have been ever so slightly tongue in cheek and I did not realise it earlier.”
No, it was only written to seem tongue in cheek…
But I only joke about things I take seriously.
101. John Mashey says:
Reckova+Irsova took 11 points from Lindzen+Choi(2011)
1 that was their computed value (the low-CS, high-precision outlier)
10 were from Table 4, where L+S recomputed other people’s results, which changed them from the values in AR4. Interestingly:
3 of the 10 got increased by factors of 2.36-3.43, generating the 3 largest outliers at the right side of the Paper’s Fig 3 funnel plot (7.9, 8.1, 10.4).
The other two (6.1 and 7.53) are from Gregory et al(2001) and Andronova et al(2002), i.e., old.
Of the remaining 7 from L+S, 6 were decreased by factors of .39-.93, i.e., were shifted to the left.
Conclusion for this: this paper relies on nearly a quarter of its data from Lindzen+Choi, which provides 3 of the 5 outliers at right.
The outliers at left in Fig 3 include 2 from Scafetta, and one from Chylek et al explicitly refuted by Hargreaves and Annan.
102. John,
Wow, okay, thanks. That’s very thorough. I hadn’t realised that it relied so heavily on work that’s been heavily criticised/refuted. That Lindzen & Choi Table 4 is bizarre. Take a bunch of estimates from actual models, and then do some other analysis that completely changes the estimates.
103. John Mashey says:
And don’t forget Scafetta (2013a,b) – some astro guy might take a quick look at these, as they provide 2 of the 4 upper-left points in Fig 3 of the paper.
104. Okay, have just had a quick look. From the abstract,
In contrast, the hypothesis that the climate is regulated by specific natural oscillations more accurately fits the GST records at multiple time scales. For example, a quasi 60-year natural oscillation simultaneously explains the 1850-1880, 1910-1940 and 1970-2000 warming periods, the 1880-1910 and 1940-1970 cooling periods and the post 2000 GST plateau. This hypothesis implies that about 50% of the ∼ 0.5 °C global surface warming observed from 1970 to 2000 was due to natural oscillations of the climate system, not to anthropogenic forcing as modeled by the CMIP3 and CMIP5 GCMs. Consequently, the climate sensitivity to CO2 doubling should be reduced by half, for example from the 2.0-4.5 °C range (as claimed by the IPCC, 2007) to 1.0-2.3 °C with a likely median of ∼ 1.5 °C instead of ∼ 3.0 °C.
and from the Conclusions
The physical origin of the detected climatic oscillations
is currently uncertain, but in this paper it has been argued that they may be astronomically induced. This conclusion derives from the coherence found among astronomical and climate oscillations from the decadal to the millennial time scales.
So, a curve fitting exercise with no basis in physics. Sounds like garbage to me.
105. @Richard,
Thanks, I do intend to take up the invitation… Just as soon as I submit my thesis.
(If all goes to plan, that will be within the next two to three weeks.)
106. John Mashey says:
ATTP: ahh, but that was Scafetta(2013a), “cycles”. Even more interesting to an astro guy should be Scafetta(2013b) (“planets”).
“Global surface temperature records (e.g. HadCRUT4) since 1850 are characterized by climatic oscillations synchronous with specific solar, planetary and lunar harmonics superimposed on a background warming modulation. …
As an alternate, an empirical model is proposed that uses: (1) a specific set of decadal, multidecadal, secular and millennial astronomic harmonics to simulate the observed climatic oscillations; (2) a 0.45 attenuation of the GCM ensemble mean simulations to model the anthropogenic and volcano forcing effects. The proposed empirical model outperforms the GCMs by better hind-casting the observed 1850-2012 climatic patterns. It is found that: (1) about 50-60% of the warming observed since 1850 and since 1970 was induced by natural oscillations likely resulting from harmonic astronomical forcings that are not yet included in the GCMs; (2) a 2000-2040 approximately steady projected temperature; (3) a 2000-2100 projected warming ranging between 0.3 °C and 1.6 °C, which is significantly lower than the IPCC GCM ensemble mean projected warming of 1.1 °C to 4.1 °C; (4) an equilibrium climate sensitivity to CO2 doubling centered in 1.35 °C and varying between 0.9 °C and 2.0 °C.”
It also bashes the hockey-stick in favor of Lamb(1965) and, in 2013, repeats McIntyre+McKitrick’s claims:
“However, since 2005 a number of studies confirmed the doubts of Soon and Baliunas [36] about a diffused MWP and demonstrated: (1) Mann’s algorithm contained a mathematical error that nearly always produces hockey-stick shapes even from random data [37]”
The latter statement is false, since in fact there was no such error.
107. Wow, I hadn’t realised that that claim about Mann’s algorithm had made it into the published literature. I had thought it was confined mainly to the blogosphere and sometimes, the media.
108. @grant
Great!
PFR has a good template for replication papers: http://pfr.sagepub.com/site/includefiles/PFR_CALL.pdf
109. John Mashey says:
ATTP: well, it was in Energy and Environment.
110. John Mashey says:
Bottom line, from previous comments plus a clearer description of several problems,
1) Unnecessary fog around the data points
Paper: graphs with unlabeled points
Appendix: list of papers, Table with 48 points with 16 Study ID’s, but not authors/dates.
Not all estimates are of the same nature and not all are equally credible (2 are “climastrology”, for example), and the thrust of the paper depends on the 5 data points (*’d below) with low sensitivity and high precision, i.e., the upper left corner of the graph ATTP showed.
In practice, a reader has to find the papers, search them for the numbers, and then associate Study IDs with papers, just to figure out the dates, but also to assess the nature of the studies.
2) I did that and an interesting pattern emerged.
a) 10 of the 16 papers were from 2001-2006, mentioned in IPCC AR4(2007), most in Table 9.3.
One of the top-left papers (*Andronova+Schlesinger(2001)) was from this group.
Needless to say, in 2015, most of these studies have been superseded, often by their own authors.
b) IPCC AR5(2013) Fig 10.20(b) listed 17 distinct papers on sensitivity (+2 twice), which included zero (0) of the earlier papers in AR4. Of those, Reckova+Irsova included *Lindzen+Choi(2011) and Schmittner(2011) (some ambiguity, since AR5 referenced Schmittner(2012)). Basically, R+I managed to ignore AR5… and instead added:
*Hargreaves+Annan(2009), really a refutation of Chylek and Lohmann’s too-low number
Huber(2011) PhD
*Scafetta(2013a)
*Scafetta(2013b)
c) So, 10 of the 16 studies were considered obsolete by IPCC.
I+R almost entirely ignored AR5’s list of modern studies
The 5 upper-left numbers include the oldest study (2001), the dubious Lindzen+Choi, a refuted number, and 2 “climastrology” papers.
This is not exactly a reasonable literature analysis.
3) Following gives date, ID Study and count from I+R Appendix, Table 11, Authors
*’d are the 5 studies from which the top-left data points came
Year ID Study # Authors
2001 7 4 *Andronova+Schlesinger
2002 6 1 Gregory
2004 8 2 Murphy
2005 2 1 Frame
2005 5 1 Piani
2005 11 3 Wigley
2006 4 2 Forest
2006 16 2 Hegerl
2006 3 6 Knutti
2006 10 2 Webb
=====================
2007 AR5 – Tomassini et al
2008 AR5 – Chylek and Lohmann
2009 17 5 *Hargreaves+Annan (refuting C+L)
2010 AR5 – Murphy et al
2010 AR5 – Bender et al
2010 AR5 – Lin et al
2010 AR5 – Holden et al
2010 AR5 – Kohler et al
2011 13 2 Huber
2011 19 11 *Lindzen+Choi (+AR5)
2011 18 4 Schmittner (+AR5)
2012 AR5 – Aldrin et al
2012 AR5 – Olson et al
2012 AR5 – Schwartz
2012 AR5 – Hargreaves et al
2012 AR5 – Palaeosens
2012 AR5 – Aldrin et al (2nd time)
2012 AR5 – Olson et al (2nd time)
2013 12 1 *Scafetta(a)
2013 20 1 *Scafetta(b)
2013 AR5 – Lewis
2013 AR5 – Otto et al
111. Ethan Allen says:
Well I’ve outed myself over at RR, so whatever,
John Mashey,
Great job.
You might also like this one:
Charles University in Prague
Faculty of Social Sciences
Institute of Economic Studies
BACHELOR THESIS
Publication Bias in Measuring Anthropogenic Climate Change
Author: Dominika Reckova
(it will start to automatically download, at least it did on my pc as BPTX_2012_2_11230_0_356327_0_134657.pdf)
Table 4.1: List of primary studies used (p. 15)
Notes: The search for primary studies was terminated on March 3, 2014.
(same date as in the draft paper)
Good luck
112. Ethan,
Isn’t that the paper that’s being discussed, or am I missing something?
113. russellseitz says:
114. Ethan Allen says:
ATTP,
Yes, but it’s an earlier version, the Bachelor Thesis of Reckova, not the various (E&E) draft papers floating about (it might contain other stuff not mentioned in said draft).
Also, Zuzana Irsova (maiden name) is Zuzana Havrankova (married name) she is married to Tomas Havranek:
http://ies.fsv.cuni.cz/en/staff/havranek (PhD in 2013)
http://ies.fsv.cuni.cz/en/staff/irsova (PhD in 2015)
Just, you know, a curiosity.
115. Ethan,
I see, thanks. I’ll have a look.
116. fourecks says:
@RichardTol
“@wotts
Yes, we have ethics approval for this survey.”
I find it very difficult to believe that an institutional ethics committee would give ethical approval to an online study where the provided link gives no information about who is carrying out the research, whom you should ask for more information, or whom to contact if you have concerns.
117. fourecks,
I’m taking Richard at his word. Maybe Eli will email Sussex and get confirmation?
118. fourecks says:
Maybe gremlins intervened in the “information for participants” section.
119. John Mashey says:
Ethan: good finds
I feel sorry for badly-supervised students like Reckova
# A powerboat, starting from rest, maintains a constant acceleration
1. Sep 24, 2013
### hey123a
1. The problem statement, all variables and given/known data
A powerboat, starting from rest, maintains a constant acceleration. After a certain time t, its displacement and velocity are r and v. At time 2t, what would be its displacement and velocity, assuming the acceleration remains the same?
a) 2r and 2v
b) 2r and 4v
c) 4r and 2v
d) 4r and 4v
2. Relevant equations
v = vo + at
x = vo*t + 1/2at^2
3. The attempt at a solution
r = vo*t + 1/2at^2
r = (0) + 1/2a(2T)^2
r = 1/2a4T^2
v = vo + at
v = (0) + a(2T)
v = a2T
i don't know what to do next because acceleration is not given, so how could i even isolate anything?
2. Sep 24, 2013
### voko
What does "starting from rest" mean to you?
3. Sep 24, 2013
### hey123a
starting from rest means initial velocity is equal to zero
4. Sep 24, 2013
### voko
So you got $v_1 = at$ and $r_1 = at^2/2$. You have further found at $2t$, $v_2 = a(2t)$ and $r_2 = a(4t^2)/2$. All that you need to do is express $v_2$ via $v_1$ and $r_2$ via $r_1$.
5. Sep 25, 2013
### hey123a
what do you mean by via
6. Sep 25, 2013
### voko
If you have a = 5b, and c = 10b, then you can express c via a as follows: c = 2a.
7. Sep 25, 2013
### hey123a
Ah okay so
if v1 = at
and v2 = 2at
then v2 = 2v1
and if r1 = at^2/2
and r2 = a4t^2/2
r2 can be simplified to 2at^2
then r2 = 4r1
right? to check,
4r1 = 4(at^2)/2
= 4at^2/2
= 2at^2
which leaves me with 4r and 2v as my answer, which is c, the correct answer :) thank you
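For what it's worth, a quick symbolic check of the result (a sketch using sympy, not part of the original thread):

```python
import sympy as sp

a, t = sp.symbols("a t", positive=True)

# Constant acceleration from rest: v(t) = a*t, r(t) = a*t**2/2.
v1, r1 = a * t, a * t**2 / 2
v2, r2 = a * (2 * t), a * (2 * t) ** 2 / 2

print(sp.simplify(v2 / v1))  # 2 -> velocity doubles
print(sp.simplify(r2 / r1))  # 4 -> displacement quadruples, i.e. answer (c)
```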
# how to prove $\sum_{k=0}^{m}\binom{n+k}{n}=\binom{n+m+1}{n+1}$ without induction?
$$\sum_{k=0}^{m}\binom{n+k}{n}=\binom{n+m+1}{n+1}$$
how to prove it without induction?
I tried several ways but failed.
Can anybody help me?
\begin{align} \color{#00f}{\large\sum_{k = 0}^{m}{n + k \choose n}}&=\sum_{k = 0}^{m} \int_{\verts{z} = 1}{\pars{1 + z}^{n + k} \over z^{n + 1}}\,{\dd z \over 2\pi\ic} =\int_{\verts{z} = 1}{\dd z \over 2\pi\ic}\,{1 \over z^{n + 1}} \sum_{k = 0}^{m}\pars{1 + z}^{n + k} \\[3mm]&=\int_{\verts{z} = 1}{\dd z \over 2\pi\ic}\,{1 \over z^{n + 1}}\, {\pars{1 + z}^{n}\bracks{\pars{1 + z}^{m + 1} - 1} \over \pars{1 + z} - 1} \\[3mm]&=\int_{\verts{z} = 1}{\dd z \over 2\pi\ic}\, {\pars{1 + z}^{n + m + 1} \over z^{n + 2}} -\ \overbrace{% \int_{\verts{z} = 1}{\dd z \over 2\pi\ic}\,{\pars{1 + z}^{n} \over z^{n + 2}}} ^{\ds{=\ 0}} \\[3mm]&= \sum_{k = 0}^{n + m + 1}{n + m + 1 \choose k} \overbrace{\int_{\verts{z} = 1}{z^{k} \over z^{n + 2}}\,{\dd z \over 2\pi\ic}} ^{\ds{\delta_{k,n + 1}}} =\color{#00f}{\large{n + m + 1 \choose n + 1}} \end{align}
• can you solve this question using same method math.stackexchange.com/questions/926978 – user130806 Sep 11 '14 at 17:22
• @user130806 I'll check it later. I don't know yet. Thanks. – Felix Marin Sep 13 '14 at 6:14
There is a combinatorial interpretation of both the expressions
The R.H.S. counts the number of ways of picking $n+1$ distinct integers from $S=\{1,2,\ldots,n+m+1\}$.
The L.H.S. counts the same selections, made by first choosing the largest integer $n+k+1$ and then choosing the remaining $n$ integers from $\{1,2,\ldots,n+k\}$, for each $k=0,1,2,\ldots,m$.
• Nice interpretation! Perhaps, the largest integer must be "n+k+1"? – Hoda Mar 9 '14 at 1:48
• @Hoda you are right ... thanks for pointing it out :) – r9m Mar 9 '14 at 1:51
Another method:
$$\sum_{k=0}^{m}\binom{n+k}{n}$$
Setting $$n+k \mapsto k$$ and using the Hockey-stick identity gives:
$$=\sum_{k=n}^{m+n}\binom{k}{n}=\binom{m+n+1}{n+1}$$ |
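A quick numerical sanity check of the identity (a sketch, independent of either proof):

```python
from math import comb

# Spot-check sum_{k=0}^{m} C(n+k, n) == C(n+m+1, n+1) on a small grid.
for n in range(6):
    for m in range(6):
        lhs = sum(comb(n + k, n) for k in range(m + 1))
        rhs = comb(n + m + 1, n + 1)
        assert lhs == rhs, (n, m, lhs, rhs)
print("identity holds for all n, m < 6")
```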
Damped Harmonic Oscillation Question
• Apr 30th 2009, 08:21 PM
nosh
Damped Harmonic Oscillation Question
If a marble is placed on the end of a horizontal oscillating spring and the harmonic horizontal position of the marble as a function of time is given as :
If x = A ( 2 pi f t)
where A is amplitude
f is frequency
and t is time in seconds
If the spring oscillates every 0.5 seconds and has a maximum displacement of 0.2 m
what is the frequency?
what is the velocity as a function of time ?
how do you write the equation in the form given above for the position of the marble as a function of time ?
• May 1st 2009, 04:34 AM
Showcase_22
Isn't it $x=A \cos ( 2 \pi ft)$?
If the spring oscillates every 0.5 seconds, then isn't the frequency just $\frac{1}{0.5}=2$Hz?
The velocity is $v=\frac{dx}{dt}=-2 \pi Af \sin (2 \pi f t)$.
From here you can normally use the initial conditions to find A. When $t=0, v=0$. |
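A small numerical sketch of these formulas (assuming, per the replies above, $x = A \cos(2 \pi f t)$ with A = 0.2 m and f = 2 Hz):

```python
import numpy as np

A, f = 0.2, 2.0  # amplitude in metres, frequency in Hz (period 0.5 s)

def position(t):
    return A * np.cos(2 * np.pi * f * t)

def velocity(t):
    return -2 * np.pi * A * f * np.sin(2 * np.pi * f * t)

for t in (0.0, 0.125, 0.25):
    print(f"t={t:5.3f} s  x={position(t):+.3f} m  v={velocity(t):+.3f} m/s")
# At t=0 the marble is at maximum displacement (x=+0.2 m) with v=0.
```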
# Hypertext marks in LaTeX: a manual for hyperref - latex url package documentation
## latex url package documentation - CTAN: Package url
\mathchardef\UrlBreakPenalty=100 \mathchardef\UrlBigBreakPenalty=100 The default penalties are \binoppenalty and \relpenalty. These have such odd non-LaTeX syntax because I don’t expect people to need to change them often. (The \mathchardef does not relate to math mode; it is only a way to store a number.) The command \url is a form of verbatim command that allows linebreaks at certain characters or combinations of characters, accepts reconfiguration, and can …
texdoc url: Generally, every package’s documentation can be accessed via texdoc <packagename>. Most packages come with typeset documentation as a PDF. – Incidentally, I don’t know why most LaTeX beginners’ guides don’t mention this, or only mention it obliquely. This is by far the most useful thing I’ve ever learned about LaTeX.

May 02, 2019 · How to write URLs in LaTeX? [closed] How do you write a URL in LaTeX? The subscripts and everything else make the font look very strange when it compiles.
Sep 07, 2008 · LaTeX skims through your document only checking for proper syntax and usage of the commands, but doesn’t produce any (DVI or PDF) output. As LaTeX runs faster in this mode you may save yourself valuable time. If you want to get the output, you … amssymb: it adds new symbols to be used in math mode. The base_name command is used to communicate to the DVI viewer the full (URL) location of the current document so that files specified by relative URLs may be retrieved correctly. The href and name commands must be paired with an end command later in the TeX file; the TeX commands between the two ends of a pair form an anchor in the document.
This solution seems to work better than the others, but I experienced a problem in the limit case when the url ends close to the line ending; LaTeX is unable to move the word following the url to a new line and puts that word (or at least its first syllable) on the same line, even if it protrudes past the right margin, with really bad results. – mmj Nov 28 '16 at 11:27

hyperref – Extensive support for hypertext in LaTeX. The hyperref package is used to handle cross-referencing commands in LaTeX to produce hypertext links in the document.
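For reference, a minimal usage sketch of the two packages discussed above (the URLs are illustrative only):

```latex
\documentclass{article}
\usepackage{url}       % \url{...}: verbatim-like URLs with sensible line breaks
\usepackage{hyperref}  % makes URLs and cross-references clickable; loads url

\begin{document}
% \url typesets the address verbatim and allows breaks at characters like / . -
See \url{https://ctan.org/pkg/url} for the package documentation.

% \href separates the link target from the displayed text.
See the \href{https://ctan.org/pkg/hyperref}{hyperref manual} for details.
\end{document}
```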
# 6.6: Terminating or Repeating?
You’ve seen that when you write a fraction as a decimal, sometimes the decimal terminates, like:
$$\frac{1}{2} = 0.5 \quad \text{and} \quad \frac{33}{100} = 0.33 \ldotp$$
However, some fractions have decimal representations that go on forever in a repeating pattern, like:
$$\frac{1}{3} = 0.33333 \ldots \quad \text{and} \quad \frac{6}{7} = 0.857142857142857142857142 \ldots$$
It’s not totally obvious, but it is true: Those are the only two things that can happen when you write a fraction as a decimal.
Of course, you can imagine (but never write down) a decimal that goes on forever but doesn’t repeat itself, for example:
$$0.1010010001000010000001 \ldots \quad \text{and} \quad \pi = 3.14159265358979 \ldots$$
But these numbers can never be written as a nice fraction $$\frac{a}{b}$$ where $$a$$ and $$b$$ are whole numbers. They are called irrational numbers. The reason for this name: fractions like $$\frac{a}{b}$$ are also called ratios. Irrational numbers cannot be expressed as a ratio of two whole numbers.
For now, we’ll think about the question: Which fractions have decimal representations that terminate, and which fractions have decimal representations that repeat forever? We’ll focus just on unit fractions.
Definition
A unit fraction is a fraction that has 1 in the numerator. It looks like $$\frac{1}{n}$$ for some whole number $$n$$.
Think / Pair / Share
• Which of the following fractions have infinitely long decimal representations and which do not? $$\frac{1}{2} \quad \frac{1}{3} \quad \frac{1}{4} \quad \frac{1}{5} \quad \frac{1}{6} \quad \frac{1}{7} \quad \frac{1}{8} \quad \frac{1}{9} \quad \frac{1}{10} \ldotp$$
• Try some more examples on your own. Do you have a conjecture?
A fraction $$\frac{1}{b}$$ has an infinitely long decimal expansion if:
________________________________.
Problem 7
Complete the table below which shows the decimal expansion of unit fractions where the denominator is a power of 2. (You may want to use a calculator to compute the decimal representations. The point is to look for and then explain a pattern, rather than to compute by hand.)
Try even more examples until you can make a conjecture: What is the decimal representation of the unit fraction $$\frac{1}{2^{n}}$$?
Fraction Denominator Decimal
$$\frac{1}{2}$$ $$2$$ $$0.5$$
$$\frac{1}{4}$$ $$2^{2}$$ $$0.25$$
$$\frac{1}{8}$$ $$2^{3}$$ $$0.125$$
$$\frac{1}{16}$$
$$\frac{1}{32}$$
$$\frac{1}{64}$$
$$\frac{1}{128}$$
$$\frac{1}{256}$$
Problem 8
Complete the table below which shows the decimal expansion of unit fractions where the denominator is a power of 5. (You may want to use a calculator to compute the decimal representations. The point is to look for and then explain a pattern, rather than to compute by hand.)
Try even more examples until you can make a conjecture: What is the decimal representation of the unit fraction $$\frac{1}{5^{n}}$$?
Fraction Denominator Decimal
$$\frac{1}{5}$$ $$5$$ $$0.2$$
$$\frac{1}{25}$$ $$5^{2}$$ $$0.04$$
$$\frac{1}{125}$$ $$5^{3}$$
$$\frac{1}{625}$$
$$\frac{1}{3125}$$
$$\frac{1}{15625}$$
Marcus noticed a pattern in the table from Problem 7, but was having trouble explaining exactly what he noticed. Here’s what he said to his group:
I remembered that when we wrote fractions as decimals before, we tried to make the denominator into a power of ten. So we can do this: $$\begin{split} \frac{1}{2} &= \frac{1}{2} \cdot \frac{5}{5} = \frac{5}{10} = 0.5 \ldotp \\ \frac{1}{4} &= \frac{1}{4} \cdot \frac{25}{25} = \frac{25}{100} = 0.25 \ldotp \\ \frac{1}{8} &= \frac{1}{8} \cdot \frac{125}{125} = \frac{125}{1000} = 0.125 \ldotp \end{split}$$When we only have 2’s, we can always turn them into 10’s by adding enough 5’s.
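Marcus's pattern is easy to check computationally (a sketch; the code multiplies the numerator and denominator by $$5^{n}$$, which is what Marcus's "adding enough 5's" really amounts to):

```python
# Check Marcus's observation: 1/2**n equals 5**n / 10**n, so its decimal
# expansion terminates after exactly n digits, and those digits are 5**n.
for n in range(1, 8):
    numerator = 5 ** n           # multiply top and bottom by 5**n
    decimal = numerator / 10 ** n
    print(f"1/2^{n} = {numerator}/{10 ** n} = {decimal}")
# 1/2^1 = 5/10 = 0.5
# 1/2^2 = 25/100 = 0.25
# 1/2^3 = 125/1000 = 0.125  ... and so on.
```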
Think / Pair / Share
• Write out several more examples of what Marcus discovered.
• If Marcus had the unit fraction $$\frac{1}{2^{n}}$$, what would be his first step to turn it into a decimal? What would the decimal expansion look like and why?
• Now think about unit fractions with powers of 5 in the denominator. If Marcus had the unit fraction $$\frac{1}{5^{n}}$$, what would be his first step to turn it into a decimal? What would the decimal expansion look like and why?
Marcus had a really good insight, but he didn’t explain it very well. He doesn’t really mean that we “turn 2’s into 10’s.” And he’s not doing any addition, so talking about “adding enough 5’s” is pretty confusing.
Problem 9
1. Complete the statement below by filling in the numerator of the fraction.
The unit fraction $$\frac{1}{2^{n}}$$ has a decimal representation that terminates. The representation will have n decimal digits, and will be equivalent to the fraction $$\frac{?}{10^{n}} \ldotp$$
2. Write a better version of Marcus’s explanation to justify why this fact is true.
Problem 10
Write a statement about the decimal representations of unit fractions $$\frac{1}{5^{n}}$$ and justify that your statement is correct. (Use the statement in Problem 9 as a model.)
Problem 11
Each of the fractions listed below has a terminating decimal representation. Explain how you could know this for sure, without actually calculating the decimal representation. $$\frac{1}{10} \quad \frac{1}{20} \quad \frac{1}{50} \quad \frac{1}{200} \quad \frac{1}{500} \quad \frac{1}{4000} \ldotp$$
## The Period of a Repeating Decimal
If the denominator of a fraction can be factored into just 2’s and 5’s, you can always form an equivalent fraction where the denominator is a power of ten.
Starting with $$\frac{1}{2^{a} 5^{b}},$$
we can form an equivalent fraction
$$\frac{1}{2^{a} 5^{b}} = \frac{1}{2^{a} 5^{b}} \cdot \frac{2^{b} 5^{a}}{2^{b} 5^{a}} = \frac{2^{b} 5^{a}}{2^{a+b} 5^{a+b}} = \frac{2^{b} 5^{a}}{10^{a+b}} \ldotp$$
The denominator of this fraction is a power of ten, so the decimal expansion is finite with (at most) $$a+b$$ places.
What about fractions where the denominator has other prime factors besides 2’s and 5’s? Certainly we can’t turn the denominator into a power of 10, because powers of 10 have just 2’s and 5’s as their prime factors. So in this case the decimal expansion will go on forever. But why will it have a repeating pattern? And is there anything else interesting we can say in this case?
Definition
The period of a repeating decimal is the smallest number of digits that repeat.
For example, we saw that
$$\frac{1}{3} = 0.33333 \cdots = 0. \bar{3} \ldotp$$
The repeating part is just the single digit 3, so the period of this repeating decimal is one.
Similarly, we know that
$$\frac{6}{7} = 0.857142857142857142857142 \ldots = 0. \overline{857142} \ldotp$$
The smallest repeating part is the digits 857142, so the period of this repeating decimal is 6.
You can think of it this way: the period is the length of the string of digits under the vinculum (the horizontal bar that indicates the repeating digits).
Problem 12
Complete the table below which shows the decimal expansion of unit fractions where the denominator has prime factors besides 2 and 5. (You may want to use a calculator to compute the decimal representations. The point is to look for and then explain a pattern, rather than to compute by hand.)
Try even more examples until you can make a conjecture: What can you say about the period of the fraction $$\frac{1}{n}$$ when n has prime factors besides 2 and 5?
Fraction Decimal Period
$$\frac{1}{3}$$ $$0. \bar{3}$$ $$1$$
$$\frac{1}{6}$$ $$0.1 \bar{6}$$ $$1$$
$$\frac{1}{7}$$ $$0. \overline{142857}$$ $$6$$
$$\frac{1}{9}$$
$$\frac{1}{11}$$
$$\frac{1}{12}$$
$$\frac{1}{13}$$
$$\frac{1}{14}$$
Imagine you are doing the “Dots & Boxes” division to compute the decimal representation of a unit fraction like $$\frac{1}{6}$$. You start with a single dot in the ones box:
To find the decimal expansion, you “unexplode” dots, form groups of six, see how many dots are left, and repeat.
Picture 1: When you unexplode the first dot, you get 10 dots in the $$\frac{1}{10}$$ box, which gives one group of six with remainder of 4.
Picture 2: When you unexplode those four dots, you get 40 dots in the $$\frac{1}{100}$$ box, which gives six groups of six with a remainder of 4.
Picture 3: Unexplode those 4 dots to get 40 in the next box to the right.
Picture 4: Make six groups of 6 dots with remainder 4.
Since the remainder repeated (we got a remainder of 4 again), we can see that the process will now repeat forever:
• unexplode 4 dots to get 40 in the next box to the right,
• make six groups of 6 dots with remainder 4,
• unexplode 4 dots to get 40 in the next box to the right,
• make six groups of 6 dots with remainder 4,
• and so on forever…
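The remainder-tracking idea in these pictures can be mechanized directly (a sketch, not part of the "Dots & Boxes" materials; the function below does ordinary long division while watching for a repeated remainder):

```python
def decimal_expansion(n, max_digits=30):
    # Long division of 1/n: when a remainder repeats, the digits must repeat
    # from that point on, just as the remainder 4 recurs forever for 1/6.
    digits, seen, r = [], {}, 1
    while r and r not in seen and len(digits) < max_digits:
        seen[r] = len(digits)
        r *= 10
        digits.append(r // n)
        r %= n
    if r == 0:
        return "0." + "".join(map(str, digits))  # terminating expansion
    start = seen[r]  # the repeating block begins where this remainder first appeared
    head = "".join(map(str, digits[:start]))
    cycle = "".join(map(str, digits[start:]))
    return f"0.{head}({cycle}) repeating, period {len(cycle)}"

for n in (6, 7, 11, 12):
    print(f"1/{n} = {decimal_expansion(n)}")
# 1/6  = 0.1(6) repeating, period 1
# 1/7  = 0.(142857) repeating, period 6
# 1/11 = 0.(09) repeating, period 2
# 1/12 = 0.08(3) repeating, period 1
```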
Work on the following exercises on your own or with a partner.
1. Use “Dots & Boxes” division to compute the decimal representation of $$\frac{1}{11}$$. Explain how you know for sure the process will repeat forever.
2. Use “Dots & Boxes” division to compute the decimal representation of $$\frac{1}{12}$$. Explain how you know for sure the process will repeat forever.
3. What are the possible remainders you can get when you use division to compute the fraction $$\frac{1}{7}$$? How can you be sure the process will eventually repeat?
4. What are the possible remainders you can get when you use division to compute the fraction $$\frac{1}{9}$$? How can you be sure the process will eventually repeat?
Problem 13
Suppose that $$n$$ is a whole number, and it has some prime factors besides 2’s and 5’s. Write a convincing argument that:
1. The decimal representation of $$\frac{1}{n}$$ will go on forever (it will not terminate).
2. The decimal representation of $$\frac{1}{n}$$ will be an infinite repeating decimal.
3. The period of the decimal representation of $$\frac{1}{n}$$ will be less than n.
Problem 14
1. Find the “decimal” expansion for $$\frac{1}{2}$$ in the following bases. Be sure to show your work: $$two, \quad three, \quad four, \quad five, \quad six, \quad seven \ldotp$$
2. Make a conjecture: If I write the decimal expansion of $$\frac{1}{2}$$ in base b, when will that expansion be finite and when will it be an infinite repeating decimal expansion?
3. Can you prove your conjecture is true? |
Interpolation Calculator
Linear interpolation finds the value of a point on the straight line between two known coordinates. The Interpolation Calculator calculates the unknown interpolated coordinate between the two known coordinates. Linear interpolation is used in numerical analysis and in applications such as computer graphics. It is often simply called interpolation.
Interpolation Formula
y = $\frac{(y_{2} - y_{1})(x - x_{1})}{x_{2} - x_{1}}$ + y$_{1}$
Where,
x1 and y1 are the first coordinates
x2 and y2 are the second coordinates
"x" is the point to perform the interpolation
"y" is the interpolated value.
## Steps for Linear Interpolation
Step 1 :
Observe the values of first coordinates, second coordinates and the value of "x" where interpolation performed.
Step 2 :
Apply the formula:
Interpolated y value = $\frac{(y_{2} - y_{1})(x - x_{1})}{x_{2} - x_{1}}$ + y$_{1}$
## Interpolation Examples
1. ### Calculate the value of y when x = 3 for the coordinates (3, 3) and (5, 6) by using Linear Interpolation Formula?
Step 1 :
Given x1 = 3, y1 = 3, x2 = 5, y2 = 6 and x = 3
Step 2 :
Interpolated y value = $\frac{(y_{2} - y_{1})(x - x_{1})}{x_{2} - x_{1}}$ + y$_{1}$
y = $\frac{(6 - 3)(3 - 3)}{5 - 3}$ + 3
y = 3
Interpolated y value = 3
2. ### Calculate the value of y when x = 5 for the coordinates (4, 2) and (6, 7) by using Linear Interpolation Formula?
Step 1 :
Given x1 = 4, y1 = 2 x2 = 6, y2 = 7 and x = 5
Step 2 :
Interpolated y value = $\frac{(y_{2} - y_{1})(x - x_{1})}{x_{2} - x_{1}}$ + y$_{1}$
y = $\frac{(7 - 2)(5 - 4)}{6 - 4}$ + 2
y = 4.5
Interpolated y value = 4.5
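The same formula translates directly into code (a minimal sketch reproducing both worked examples):

```python
def interpolate(p1, p2, x):
    """Linear interpolation: y on the line through p1 and p2, evaluated at x."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) * (x - x1) / (x2 - x1) + y1

print(interpolate((3, 3), (5, 6), 3))  # 3.0  (Example 1)
print(interpolate((4, 2), (6, 7), 5))  # 4.5  (Example 2)
```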
# Does there exist a commutative monoid without non-trivial idempotents but with a trivial Grothendieck group?
Does there exist a non-trivial commutative monoid that has no non-trivial idempotents yet has a trivial Grothendieck group?
To partially explain my motivation, a cute little monoid! Consider a commutative monoid $M = \{0, k, a\},\ a + a = a + k = k + k = k$. It is very trivial, but at the same time interesting, because its Grothendieck group is trivial, yet $a$ is not an idempotent (which illustrates how idempotents can 'drag down' other elements to the kernel of the canonical map $x \mapsto x - 0$), and $a$ is also not a sum of an invertible and an idempotent element.
-
It was a misguided comment; sorry for the confusion. – Arturo Magidin Feb 2 '12 at 2:02
Yes. Let $\mathbb{Z}_+ = \{1,2,3,\ldots\}$, let $M = \{(0,0)\}\cup (\mathbb{Z}_+\times\mathbb{Z}_+)$, and consider the monoid structure on $M$ defined by $$(x_1,y_1) + (x_2,y_2) \;=\; \begin{cases}(x_2,y_2) & \text{if }x_1<x_2 \\ (x_1,y_1+y_2) & \text{if }x_1 = x_2 \\(x_1,y_1) & \text{if }x_1 > x_2\end{cases}$$ Then the Grothendieck group for this monoid is trivial, since $(x,y) + (x+1,y) = (x+1,y)$ for all $(x,y)\in\mathbb{Z}_+\times\mathbb{Z}_+$. However, the monoid itself is non-trivial and commutative, and has no nontrivial idempotent elements.
Incidentally, this monoid can also be described in the following way: let $\mathbb{N}[x]$ be the monoid of all polynomials with natural number coefficients under addition, and let $\sim$ be the congruence relation on $\mathbb{N}[x]$ defined by $p(x)\sim q(x)$ if and only if $p(x)$ and $q(x)$ have the same leading term. Then the quotient $\mathbb{N}[x]/{\sim}$ is isomorphic to the monoid $M$ defined above.
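For what it's worth, the answer's claims are easy to spot-check by brute force on a truncated copy of $M$ (a sketch; the bound B is arbitrary and only limits which elements get tested):

```python
from itertools import product

B = 6  # arbitrary truncation bound
M = [(0, 0)] + [(x, y) for x in range(1, B + 1) for y in range(1, B + 1)]

def add(p, q):
    # The monoid operation from the answer above.
    (x1, y1), (x2, y2) = p, q
    if x1 < x2:
        return (x2, y2)
    if x1 == x2:
        return (x1, y1 + y2)
    return (x1, y1)

assert all(add((0, 0), p) == p for p in M)                    # identity
assert all(add(p, q) == add(q, p) for p, q in product(M, M))  # commutative
assert all(add(add(p, q), r) == add(p, add(q, r))
           for p, q, r in product(M, M, M))                   # associative
assert [p for p in M if add(p, p) == p] == [(0, 0)]           # only trivial idempotent
print("identity, commutativity, associativity, and idempotents all check out")
```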
Suppose $M$ is a finite monoid without nontrivial idempotents. If $x\in M$ is not $1$, then the set $\{x, x^2,x^3,\dots\}$ is finite and a little work shows it contains an idempotent. The idempotent must be $1$, by hypothesis, so $x$ is in fact invertible in $M$. We thus see that $M$ is a group. – Mariano Suárez-Alvarez Feb 3 '12 at 7:31 |
# Maurer-Cartan 1-form as a connection 1-form
As MO questions go, this one might be borderline - I'm guessing it could be a homework problem in a suitably advanced differential geometry class. I tried asking on math.stackexchange yesterday and it has scarcely received 20 views let alone an answer, so I'm trying it here instead. If it gets 4 votes to close, I'll give the 5th.
I'm trying to decipher a differential geometric comment on page 23-24 of Berline, Getzler, and Vergne's "Heat Kernels and Dirac Operators".
Take a trivial vector bundle $E \times M$ on a manifold $M$ with connection $\nabla = d + \omega$ where $\omega$ is an $End(E)$-valued 1 form. Let $g: GL(E) \to End(E)$ be the tautological map sending a linear map in $GL(E)$ to itself as an element of $End(E)$. The claim is that the connection 1-form on the (trivial) frame bundle for $E \times M$ is given by $g^{-1} \pi^* \omega g + g^{-1} d g$. In particular, if $\omega = 0$ then we get that the trivial connection on the trivial bundle is the Maurer-Cartan 1-form. Unfortunately, I don't see how to give a convincing proof of this - can someone help?
-
A concrete explanation: If you have a trivial connection, then a constant frame has covariant derivative zero. If you have a non-constant frame, then it can be written as a $GL(n)$-valued function multiplied by the constant section. Then the covariant derivative of the non-constant frame relative to itself can be obtained by differentiating this using the product rule. The Maurer-Cartan forms appear when you do this. |
## MIT 1803 lecture 6
This lecture introduced complex numbers – most of which I knew. But useful to me was the reminder that, when presented with an integral like $\int e^{-x} \cos x dx$ one can solve it by passing to the complex domain, or complexifying the integral, that is, noting that the integrand is the Re part of a complex expression, solving the integral and pulling off the Re part of the answer.
$\int e^{-x} \cos x \, dx = \mathrm{Re} \left[ \int e^{-x} e^{ix} \, dx \right] = \mathrm{Re} \left[ \dfrac{e^{(-1+i)x}}{-1+i} \right] = \dfrac{e^{-x}}{2} \left( \sin x - \cos x \right) + C$
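A quick symbolic check of that antiderivative (a sketch using sympy, not part of the lecture):

```python
import sympy as sp

x = sp.symbols("x", real=True)

# Claimed antiderivative from the complexified computation:
F = sp.exp(-x) / 2 * (sp.sin(x) - sp.cos(x))

# Its derivative should recover the original integrand e^{-x} cos x.
assert sp.simplify(sp.diff(F, x) - sp.exp(-x) * sp.cos(x)) == 0
print("d/dx of e^(-x)(sin x - cos x)/2 equals e^(-x) cos x")
```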
### Deterministic on-line routing on area-universal networks
Paul Bay and Gianfranco Bilardi. Journal of the ACM (JACM), Volume 42, Issue 3 (May 1995), pp. 614–640. Association for Computing Machinery (ACM). ISSN 0004-5411; e-ISSN 1557-735X.

Keywords: area-universal ♦ fat-tree ♦ general purpose

Abstract: Two deterministic routing networks are presented: the pruned butterfly and the sorting fat-tree. Both networks are area-universal, that is, they can simulate any other routing network fitting in similar area with polylogarithmic slowdown. Previous area-universal networks were either for the off-line problem, where the message set to be routed is known in advance and substantial precomputation is permitted, or involved randomization, yielding results that hold only with high probability. The two networks introduced here are the first that are simultaneously deterministic and on-line, and they use two substantially different routing techniques. The performance of their routing algorithms depends on the difficulty of the problem instance, which is measured by a quantity $\lambda$ known as the load factor. The pruned butterfly runs in time $O(\lambda \log^2 N)$, where $N$ is the number of possible sources and destinations for messages and $\lambda$ is assumed to be polynomial in $N$. The sorting fat-tree algorithm runs in $O(\lambda \log N + \log^2 N)$ time for a restricted class of message sets including partial permutations. Other results of this work include a “flexible” circuit that is area-time optimal across a range of different input sizes and an area-time lower bound for routers based on wire-length arguments.
# Is Thompson's Group F amenable?
Last year a paper on the arXiv (Akhmedov) claimed that Thompson's group $F$ is not amenable, while another paper, published in the journal "Infinite dimensional analysis, quantum probability, and related topics" (vol. 12, p173-191) by Shavgulidze claimed the exact opposite, that $F$ is amenable. Although the question of which, if either, was a valid proof seemed to be being asked by people, I cannot seem to find a conclusion anywhere and the discussion of late seems to have died down considerably. From what I can gather, Shavgulidze's paper seems unfixable, while the validity of Akhmedov's paper is undecided (although it may of course be decided now).
So, does anyone know if either of these papers is valid?
-
Or both, perhaps??? – Gil Kalai Jun 22 '10 at 9:42
@Gil - only if Edward Nelson succeeds in his program of proving PA inconsistent... :P – David Roberts Dec 9 '11 at 1:58
I worked on this question as a grad student in the 1990's at Cornell. A lifetime ago! I went into business but occasionally check for updates out of curiosity. It is a complex problem! J birge – user48541 Mar 21 '14 at 5:42
There will be a workshop on Thompson's group in May. www-ma4.upc.edu/~thompson – Narutaka OZAWA Mar 21 '14 at 8:26
An update, which I don't think has been remarked upon here: Akhmedov has withdrawn his claim of having a proof that $F$ is non-amenable. See his comments on arXiv:1310.4395, which conclude with: "It is hard to say if there is any chance for the approach to succeed." – zaremsky Dec 23 '14 at 16:53
While I did not participate in most of the checking of Shavgulidze's argument, I can offer the following partial account of the situation. I am told the paper was correct except for a lemma (or sequence of them) claiming that a sequence of auxiliary measures had certain properties. These were Borel measures on the $n$-simplex (one for each $n$). I believe it was shown that the original proposed auxiliary sequence of measures did not have one of the two properties. Shavgulidze proposed other sequences of measures. The most recent attempt that I am aware of (which was presented during his 2010 trip to the US mentioned by Mark Sapir in the above comment) involved the direct construction of Folner sets for the action of $F$ on the finite subsets of dyadic rationals (see the next paragraphs). The details were somewhat sparse and the definitions involved many unspecified numerical parameters, but it appeared to be the case that these sets could not be Folner in the necessary sense (see below for a clarification of "necessary sense"). This is because they would likely both contradict the iterated exponential lower bound on the Folner function which I have demonstrated and because they appear to violate the qualitative properties which I have demonstrated that Folner sets of trees must have (see the pre-print on my webpage; the qualitative condition appears in lemma 5.7, noting that marginal implies measure 0 with respect to any invariant measure).
Meanwhile I was able to provide a direct elementary proof that the existence of such a sequence having these properties implied the amenability of $F$. In fact the proof gives an explicit procedure for constructing (weighted) Folner sets from the sequence of measures satisfying the hypotheses mentioned above. A note containing the details was circulated to a few people around the time of Shavgulidze's visit to Vanderbilt. While I am reluctant to speak for anyone else (including the author), it appears to me that after the dust had settled (which took a considerable amount of time), the problem with the proof seems to have at least some of its roots in the following observation (which I now include for the sake of posterity). $F$ acts on the finite subsets of the dyadic rationals (let's call this set $\mathcal{D}$) by taking the set-wise image (here I am utilizing the piecewise linear function model of $F$). Now let $\mathcal{T}$ denote the finite subsets of $[0,1]$ which contain $0$ and $1$ and are such that any consecutive pair is of the form $p/2^q,(p+1)/2^q$ (for natural numbers $p,q$). $F$ only acts partially on $\mathcal{T}$: the action $T \cdot f$ is defined if $f'$ is defined on the complement of $T$ in $[0,1]$ (there may be other cases when $T \cdot f$ is in $\mathcal{T}$, but let's restrict the domain of the action as above). The full action of $F$ on $\mathcal{D}$ is amenable. The point here is that the action of the standard generators on the sets $\{0,1-2^{-n},1\}$ is the same for large enough $n$ and thus we can build Folner sets as in a $\mathbb{Z}$ action. The amenability of the partial action of $F$ on $\mathcal{T}$ is, on the other hand, equivalent to the amenability of $F$ (this is well known, but see the preprint above to see this spelled out in the present jargon).
Now here is the catch, if we also require that the invariant measure/Folner sets for the action of $F$ on $\mathcal{D}$ to concentrate on sets of mesh less than $1/16$, then one again arrives at an equivalent formulation of the amenability of $F$. The author was aware of the need for the mesh condition, but (in the most recent example) arranged it only in a modification after the fact (which interferes with invariance).
Incidentally the hypotheses on the sequence of measures mentioned above are a condition requiring that the measures concentrate on sets of arbitrarily small mesh as $n$ tends to infinity and a condition which is an analog of translation invariance.
I apologize if this borders on too much information.
[Added 1/28/2011] Shavgulidze's 1/14/2011 posting to the ArXiv is essentially a more detailed version of what he was saying in notes, seminars, and private communication in January 2010 during his visit to the US mentioned in Mark Sapir's post above. The present note is still sufficiently vague and full of sufficiently many errors (many typographical in nature) that it is hard (or easy, if you like) to say explicitly which line of the proof is incorrect. It is possible, however, to point to places where crucial details are missing and where there are certainly going to be errors (specifically the problems will be on page 11, if not elsewhere as well). The comments from my answer above still apply equally well to the present version. It appears that the present version (or any perturbation of it) still would violate the lower bound on the growth of the Folner function which I have established. The present version still totally ignores that the combinatorial statements on page 11 themselves readily imply the amenability of F, without the involvement of any analytical concepts.
[Added 2/3/2011] Details on what is incorrect with Shavgulidze's proof of the amenablity of $F$ can be found here.
[Added 10/3/2012] Well, well, well: now I'm in the position of having announced a proof that $F$ is amenable only to have an error be found. The error was finally found by Azer Akhmedov after being overlooked for roughly 4 weeks by myself and 9 or more people who had checked the proof and found no problems. The basic strategy of the proof still may be valid: it began by considering an extension of the free binary system $(\mathbb{T},*)$ on one generator to the finitely additive probability measures on this system: $$\mu * \nu (E) = \int \int \chi_E(s * t) d \nu (t) d \mu (s).$$ It was shown (correctly) that any idempotent measure is $F$-invariant (there is a natural way of identifying $\mathbb{T}$ with the positive elements of $F$). The difficulty came in constructing the idempotent measure. A version of the Kakutani Fixed Point Theorem was used to construct approximations $K_{\mathcal{B},k,n}$ to the set of idempotent measures. The error occurs in attempting to intersect these compact families of measures. In the proof, it was claimed that the parameter $k$ could be stabilized along an ultrafilter (Lemma 4.13 in the most recent version), allowing one to take a directed intersection of nonempty compact sets. This lemma is likely false and at least is not proved as claimed. One may still be able to argue that a relevant intersection of these approximations is nonempty and hence that there is an idempotent. This seems to require new ideas though.
-
Thanks for the update! – Andres Caicedo Jan 28 '11 at 18:12
@Justin: good luck with your work on this difficult question! – tweetie-bird Oct 4 '12 at 13:59
It looks like $\mu^\mu=\mu$ is impossible for the free 1-generated magma $\mathbb T$. Indeed, suppose that such a measure $\mu$ exists. Then any homomorphic image of $\mathbb T$ should also have an idempotent measure. Using SAGE I found that the magma $\{0,1,2,3\}$ with multiplication matrix $$\left(\begin{array}{rrrr} 1 & 3 & 2 & 2 \\ 1 & 2 & 1 & 3 \\ 0 & 2 & 3 & 3 \\ 0 & 2 & 1 & 2 \end{array}\right)$$ has no idempotent measure. It is also generated by $0$: $0^0=1$, $1^0=3$, $3^0=2$. So $\mathbb T$ does not have an idempotent measure. Am I right? – Lev Glebsky Nov 23 '12 at 23:53
I am wrong, as was explained to me by Justin. – Lev Glebsky Dec 1 '12 at 0:42
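The kind of check Glebsky describes can be scripted; a rough sketch (not his SAGE code, and with the row/column convention of the multiplication matrix left as an assumption) that searches numerically for a measure with $\mu * \mu = \mu$ on the 4-element magma. Since he retracts the claim in the comment above, a near-zero residual here would not be surprising; the sketch only illustrates the computation being discussed:

```python
import numpy as np
from scipy.optimize import minimize

# Multiplication table of the 4-element magma from the comment;
# reading M[s][t] = s * t is an assumed convention.
M = np.array([[1, 3, 2, 2],
              [1, 2, 1, 3],
              [0, 2, 3, 3],
              [0, 2, 1, 2]])

def conv(mu, nu):
    """(mu * nu)(e) = sum over pairs (s, t) with s*t = e of mu(s) nu(t)."""
    out = np.zeros(4)
    for s in range(4):
        for t in range(4):
            out[M[s, t]] += mu[s] * nu[t]
    return out

def defect(p):
    """Squared residual of mu*mu = mu, with mu parametrized via softmax."""
    mu = np.exp(p)
    mu /= mu.sum()          # stay on the probability simplex
    return np.sum((conv(mu, mu) - mu) ** 2)

best = min((minimize(defect, np.random.randn(4)) for _ in range(50)),
           key=lambda r: r.fun)
print(best.fun)  # a residual near 0 suggests an idempotent measure exists
```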
Igor Pak told me about this site. The situation with the Thompson group is this. There is a counterexample to the main statement of Akhmedov's paper currently in the arXiv. The counterexample (by Victor Guba) has existed for almost a year. The paper is unreadable, so it is hard to find a concrete place where the mistake is. Shavgulidze's paper(s) was much more readable. It has "Lemma 5", whose proof was found wrong by Matt Brin. Whether the statement of Lemma 5 is correct is not clear. Most probably it is wrong. Shavgulidze gave 8 talks in the US this January (2 in Texas A&M, 4 in Vanderbilt, one in Cornell and one in Binghamton). He presented an alternative proof (trying to avoid "Lemma 5"). During his talks a big difficulty was found, mostly by Justin Moore from Cornell, but also by Brin and others. In fact several objections about the proof were expressed. One of them is that the new proof of Shavgulidze in fact produces Foelner sets in F whose sizes seem to be much smaller than the sizes predicted by a recent result of Justin Moore. Shavgulidze said that he would address these concerns after he returned to Moscow, and we have not heard from him since. So currently we are back at square one. It is not clear whether F is amenable or not, nor how to approach the proof.
-
Just to clarify: "unreadable" refers to the original paper, not to counterexample paper, right? Also, what is the status of Justin Moore's approach? – Victor Protsak Jun 22 '10 at 11:42
"Unreadable" refers to Akhmedov's paper (it is my opinion, you may want to try reading it by yourself). The counterexample was communicated to Akhmedov by Guba a year ago and did not appear as a preprint. It is essentially elementary. Guba also gave a counterexample to the previous version of Akhmedov's paper (it also did not appear as a preprint but at least Alhmedov substituted the previous version by the new one as a result of that counterexample). – Mark Sapir Jun 22 '10 at 20:10
P.S. There are very nice notes of Matt Brin's seminar on Shavgulidze's proof: arxiv4.library.cornell.edu/PS_cache/arxiv/pdf/0908/… The mistake in Lemma 5 is discovered on page 39 of the notes. – Mark Sapir Jun 23 '10 at 6:57
About the status of Justin Moore's paper it is better to ask him. As far as I know both Matt Brin and Victor Guba now say that the paper is OK. But I do not know if the paper is finally accepted in a journal. Also Guba found an easier way to prove the results of the paper which is a good sign (it is better to have two proofs than none). – Mark Sapir Jun 23 '10 at 21:40
To clarify Mark's comment in the context of my new announcement: this refers to my paper on the lower bound for the Folner function for F. That paper has now been accepted by Groups, Geometry, and Dynamics. – Justin Moore Sep 9 '12 at 11:59
The following blog post by Danny Calegari (and its links) seems to constitute the best online source of information on this matter:
http://lamington.wordpress.com/2009/07/06/amenability-of-thompsons-group-f/
-
I have come across both of these links, and in some ways the first was my reason for asking this question. The conversation dies down in the middle of January (5 months ago, and this all kicked off about a year ago) having established that Akhmedov's paper needed closer scrutiny. – user6503 Jun 3 '10 at 13:35
Shavgulidze's back.
A new "proof" is now on the arxiv.
The first part contains what was correct in the previous preprint. In the last 5 pages he tries to give a suitable proof for the group $F$. The point is always the same: he looks for a good density on the space of partitions of the interval. The style is also always the same: a few pages full of maths, with no explanation.
-
and horrendous typesetting ... – Yemon Choi Jan 17 '11 at 20:02
See the addendum to my answer above. – Justin Moore Jan 28 '11 at 17:34
My understanding is that the situation has not changed much. A group of mathematicians at Binghamton University had been investigating Shavgulidze's argument, and they found a flaw which Shavgulidze has not addressed. As of now they are still waiting for Shavgulidze to respond.
I haven't heard anything about the Akhmedov paper recently.
-
Adding to this here is a webpage for the Thompson groups seminar at Binghamton led by Matt Brin: math.binghamton.edu/matt/thompson/index.html – hypercube Jun 2 '10 at 19:12
[Update 25.11.2012]
Azer Akhmedov found an example which contradicts Theorem B*. Withdrawal of the preprint has already been scheduled.
-
I think we should wait for closer examination of the paper before saying that the problem is definitely solved. – Yemon Choi Dec 9 '11 at 1:35
More pointedly, skimming through the paper, it claims to show that the quotient of $F$ by some canonical normal subgroup $H_F$ is non-amenable. But then since the commutator subgroup of $F$ is simple, this would force $H_F$ to be trivial, and it seems unclear why that should be the case... – Yemon Choi Dec 9 '11 at 2:07
The main result of the preprint in question claims that if G is an amenable group acting by orientation preserving homeomorphisms on R, then there is a "projectively G-invariant" measure on R (action of any element on the measure only dilates the measure). There is a counterexample to this with a group in which every element has a fixed point. The author says that adding a hypothesis that G has at least one fixed point free element will fix the theorem (and still apply to F). Details were promised. We will have to wait and see. – Matt Brin Jan 7 '12 at 15:09
Unfortunately, Moore has just withdrawn this preprint. – Lior Silberman Oct 2 '12 at 8:21
@Yemon Choi, you are absolutely right about the overconfidence of my statement. It was an emotional impulse, because Prof. Levon Beklaryan is my father and the problem of the classification theorem for groups of homeomorphisms of the line has waited for a solution for 20 years. I didn't edit my post because I thought that it would be unethical towards the participants of the community. My name is Armen; I have a PhD from MSU and specialize in differential equations. – Mariarty Oct 2 '12 at 21:09
MP-SPR has been applied successfully in measurements of lipid targeting and rupture,[12] of a CVD-deposited single monolayer of graphene (3.7 Å),[13] as well as of micrometer-thick polymers.[14] Metal particle plasmons are usually modeled using the Mie scattering theory. Localized surface plasmon resonances (LSPRs) are collective electron charge oscillations in metallic nanoparticles that are excited by light. They exhibit enhanced near-field amplitude at the resonance wavelength; this field is highly localized at the nanoparticle and decays rapidly away from the nanoparticle/dielectric interface into the dielectric background, though far-field scattering by the particle is also enhanced by the resonance. Light intensity enhancement is a very important aspect of LSPRs, and localization means the LSPR has very high spatial resolution (subwavelength), limited only by the size of the nanoparticles. Because of the enhanced field amplitude, effects that depend on the amplitude, such as the magneto-optical effect, are also enhanced by LSPRs.[2][3] Surface plasmon oscillations can give rise to the intense colors of suspensions or sols containing the nanoparticles; nanoparticles or nanowires of noble metals exhibit strong absorption bands in the ultraviolet-visible light regime that are not present in the bulk metal. This extraordinary absorption increase has been exploited to increase light absorption in photovoltaic cells by depositing metal nanoparticles on the cell surface.[8] The energy (color) of this absorption differs when the light is polarized along or perpendicular to the nanowire.[9] Shifts in this resonance due to changes in the local index of refraction upon adsorption to the nanoparticles can also be used to detect biopolymers such as DNA or proteins. This is the fundamental principle behind many color-based biosensor applications, different lab-on-a-chip sensors, and diatom photosynthesis.

The surface plasmon polariton is a non-radiative electromagnetic surface wave that propagates in a direction parallel to the negative permittivity/dielectric material interface. Typical metals that support surface plasmons are silver and gold, but metals such as copper, titanium or chromium have also been used. In order to excite surface plasmon polaritons in a resonant manner, one can use electron bombardment or an incident light beam (visible and infrared are typical); the incoming beam has to match its momentum to that of the plasmon. There are two configurations which are well known. In the Otto configuration, the light illuminates the wall of a glass block, typically a prism, and is totally internally reflected. In the Kretschmann configuration, the light again illuminates the glass block, and an evanescent wave penetrates through the metal film; the plasmons are excited at the outer side of the film. When the surface plasmon wave interacts with a local particle or irregularity, such as a rough surface, part of the energy can be re-emitted as light, which can be detected behind the metal film from various directions. The simplest way to approach the problem is to treat each material as a homogeneous continuum, described by a frequency-dependent relative permittivity; this quantity, hereafter referred to as the material's "dielectric function", is the complex permittivity. Electronic and magnetic surface plasmons obey a dispersion relation in which $k(\omega)$ is the wave vector, $\omega$ the angular frequency, and $\varepsilon$ and $\mu$ the relative permittivity and permeability of each material (1: the glass block, 2: the metal film). Magnetic surface plasmons require materials with large negative magnetic permeability, a property that has only recently been made available with the construction of metamaterials. The same principle is exploited in the recently developed competitive platform based on loss-less dielectric multilayers (DBR), supporting surface electromagnetic waves with sharper resonances (Bloch surface waves).[7]

For biosensing, a bait ligand is immobilized on the dextran surface of the SPR crystal. When the analyte is injected over the bait ligand, an increase in SPR signal (expressed in response units, RU) is observed as the prey protein binds the bait layer. The mechanism of detection is that the adsorbing molecules cause changes in the local index of refraction, changing the resonance conditions of the surface plasmon waves: binding makes the reflection angle change, the resonance curves shift to higher angles as the thickness of the adsorbed film increases, and the angle changes are in the order of 0.1° during thin (about nm thickness) film adsorption; in other cases, the changes in the absorption wavelength are followed instead. The actual SPR signal can be explained by the electromagnetic "coupling" of the incident light with the surface plasmon of the gold layer, which can be influenced by the layer just a few nanometers across the gold-solution interface, i.e. the bait ligand and the analyte. Following the signal in time, the so-called "dynamic SPR" measurement, allows the monitoring of individual steps in sequential binding events, particularly useful in the assessment of, for instance, sandwich complexes.[11] When the affinity of two ligands has to be determined, the equilibrium dissociation constant must be determined; it is the equilibrium value for the product quotient. From the association rate ("on rate", $k_a$) and dissociation rate ("off rate", $k_d$), the equilibrium dissociation constant ("binding constant", $K_D$) can be calculated as $$K_D = \frac{k_d}{k_a}.$$ By performing measurements at different temperatures, thermodynamic parameters can be obtained, giving a better understanding of the studied interaction. Besides binding kinetics, SPR, and in particular multi-parametric SPR (a special configuration of SPR that can be used to characterize layers and stacks of layers), can also reveal structural changes in terms of a layer's true thickness and refractive index; the most common data interpretation is based on the Fresnel formulas, which treat the formed thin films as continuous dielectric layers, and this interpretation may result in multiple possible refractive index and thickness values, though usually only one solution is within the reasonable data range. The first SPR immunoassay was proposed in 1983 by Liedberg, Nylander, and Lundström, then of the Linköping Institute of Technology (Sweden);[10] they adsorbed human IgG on a 600-angstrom silver film, and used the assay to detect anti-human IgG in water solution. If the surface is patterned with different biopolymers, using adequate optics and imaging sensors (i.e. a camera), the technique can be extended to surface plasmon resonance imaging (SPRI); this method provides a high contrast of images based on the adsorbed amount of molecules, somewhat similar to Brewster angle microscopy (this latter is most commonly used together with a Langmuir-Blodgett trough). Related complementary techniques include plasmon waveguide resonance, QCM, extraordinary optical transmission, and dual-polarization interferometry.
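The 1:1 binding kinetics behind such sensorgrams are simple enough to simulate; a minimal Python sketch with made-up rate constants and concentration (none of these numbers come from the text above):

```python
import numpy as np

# Minimal 1:1 Langmuir binding model for an SPR sensorgram:
#   dR/dt = ka * C * (Rmax - R) - kd * R   (association phase)
#   dR/dt = -kd * R                        (dissociation phase, C = 0)
# ka, kd, C and Rmax are illustrative values, not measured data.
ka, kd = 1e5, 1e-3        # on rate (1/(M*s)) and off rate (1/s)
C, Rmax = 50e-9, 100.0    # analyte concentration (M), saturation response (RU)

KD = kd / ka              # equilibrium dissociation constant, here 10 nM
t = np.linspace(0, 600, 601)
k_obs = ka * C + kd
R_eq = Rmax * C / (C + KD)                    # plateau of association phase
R_assoc = R_eq * (1 - np.exp(-k_obs * t))
R_dissoc = R_assoc[-1] * np.exp(-kd * t)      # buffer wash, analyte removed

print(f"KD = {KD:.1e} M, equilibrium response = {R_eq:.1f} RU")
```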
Error Propagation for Half-Life
Tags:
1. Dec 1, 2015
Alejandro Golob
In this homework question we are told to calculate the half-life of an isotope based on count rates before and after a given time interval; the relevant equation is given below.
Half-life: $t_{1/2} = \frac{t\ln(2)}{\ln(A/A_0)}$
The second part asks to determine the standard deviation of the half-life due to counting statistics. We have an uncertainty for the value A and the value A0.
I have found the following solution but cannot quite follow it, so was hoping someone might be able to explain/walk through it with me.
d/dt (t1/2) = $-\frac{\ln(2)\,t}{(A/A_0)\,\ln(A/A_0)^2}$
Plugging in t = 24 and R = 2.875 yields 5.248 hr$^{-1}$.
$\sigma^2_{t_{1/2}} = (5.248)^2 \left(\frac{9.14}{(41.4)^2} + \frac{16.83}{(118.3)^2}\right)(2.875)^2$
Thus, we find that expected standard deviation of half-life is σ(t1/2) = 1.22 h
Thanks in advance for any help.
2. Dec 1, 2015
Staff: Mentor
d/dt does not make sense at that point. If you calculate the time-derivative of the expression for the half-life, you get a different result.
24 what, and what is R?
The time-derivative of a time should be dimensionless.
You can calculate the uncertainty on the ratio first, and then propagate this to the half-life measurement. $-\frac{\ln(2)\,t}{(A/A_0)\,\ln(A/A_0)^2}$ can be useful there as some part of a different formula.
Can you give the numbers you use for your calculations?
3. Dec 1, 2015
Alejandro Golob
t = 24 hr and R = A/A0 = (41.4 min$^{-1}$)/(118.3 min$^{-1}$)
I certainly agree with what you are saying about the time derivative of a time; I wasn't sure about this myself. The half-life itself is not a function of time anyway, so I am not sure how the time derivative makes sense or why it is invoked. It is possible that this is an error. I found this solution here and have been trying to see how it was arrived at. See the image of the solution below. The equation you have arrived at is certainly consistent with the first part of this solution and makes sense to me; however, I am still unclear on how that carries over to the second part.
Best Regards,
4. Dec 1, 2015
haruspex
If you look at the right hand side of the d/dt(t1/2) equation you can see that the derivative taken was with respect to R, not t.
5. Dec 1, 2015
Staff: Mentor
There are more issues. The $\sigma S_i$ calculations mix standard deviations (left and middle) with variances (right).
$\sigma_R$ should have been calculated in a clearer way, and so on; it corresponds to the left two brackets in the last formula.
5.248 seems to be a bad approximation of the expression above; I get 5.188.
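For what it's worth, the propagation can be checked numerically. A minimal Python sketch using the numbers as they appear in the posted solution; their interpretation (R taken as 118.3/41.4, and 9.14 and 16.83 read as the counting-statistics variances of the two rates) is an assumption, since the thread itself flags inconsistencies in the write-up:

```python
import numpy as np

# Numbers as they appear in the posted solution (interpretation assumed):
t = 24.0                       # elapsed time between measurements, hours
A0, A = 118.3, 41.4            # count rates, per minute
var_A0, var_A = 16.83, 9.14    # counting-statistics variances of the rates
R = A0 / A                     # ~2.86; the posted solution rounds to 2.875

# Sign-corrected half-life: t_half = t*ln(2)/ln(A0/A), so
# d(t_half)/dR = -ln(2)*t / (R * ln(R)**2), the mentor's formula above.
dthalf_dR = -np.log(2) * t / (R * np.log(R) ** 2)

# Uncertainty of the ratio by quadrature, then linear propagation:
sigma_R = R * np.sqrt(var_A / A**2 + var_A0 / A0**2)
sigma_thalf = abs(dthalf_dR) * sigma_R

print(abs(dthalf_dR))   # ~5.28 h (5.19 h if R is rounded to 2.875)
print(sigma_thalf)      # ~1.22 h, matching the quoted standard deviation
```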
## Uniform almost everywhere domination (English) Zbl 1109.03034
Summary: We explore the interaction between Lebesgue measure and dominating functions. We show, via both a priority construction and a forcing construction, that there is a function of incomplete degree that dominates almost all degrees. This answers a question of Dobrinen and Simpson, who showed that such functions are related to the proof-theoretic strength of the regularity of Lebesgue measure for $$G_\delta$$ sets. Our constructions essentially settle the reverse mathematical classification of this principle.
### MSC:
03D25 Recursively (computably) enumerable sets and degrees
03F35 Second- and higher-order arithmetic and fragments
28E15 Other connections with logic and set theory
# Inverse FFT (Inverse Complex FFT) (G Dataflow)
Computes the inverse discrete Fourier transform (IDFT) of a sequence. You can use this node when the input sequence is the Fourier transform of a complex signal.
## reset
A Boolean that specifies whether to reset the internal state of the node.
• True: Resets the internal state of the node.
• False: Does not reset the internal state of the node.
This input is available only if you wire a complex double-precision, floating-point number to FFT{x}.
Default: False
## FFT{x}
Complex valued input sequence.
This input accepts the following data types:
• Complex double-precision, floating-point number
• 1D array of complex double-precision, floating-point numbers
• 2D array of complex double-precision, floating-point numbers
## sample length
Length of each set of data. The node performs computation for each set of data.
sample length must be greater than zero.
This input is available only if you wire a complex double-precision, floating-point number to FFT{x}.
Default: 100
## shift?
A Boolean that determines whether the DC component is at the center of the FFT of the input sequence.
• True: The DC component is at the center of the FFT{x}.
• False: The DC component is not at the center of the FFT{x}.
This input is available only if you wire a 1D array of complex double-precision, floating-point numbers or a 2D array of complex double-precision, floating-point numbers to FFT{x}.
How This Input Affects 1D FFT
The following table illustrates the pattern of the elements of FFT{x} for various lengths of the FFT when shift? is False. Y is FFT{x} and n is the length of the FFT:

| Array element | Corresponding frequency, n even (k = n/2) | Corresponding frequency, n odd (k = (n-1)/2) |
| --- | --- | --- |
| $Y_0$ | DC component | DC component |
| $Y_1$ | $\Delta f$ | $\Delta f$ |
| $Y_2$ | $2\Delta f$ | $2\Delta f$ |
| $Y_3$ | $3\Delta f$ | $3\Delta f$ |
| $\vdots$ | $\vdots$ | $\vdots$ |
| $Y_{k-2}$ | $(k-2)\Delta f$ | $(k-2)\Delta f$ |
| $Y_{k-1}$ | $(k-1)\Delta f$ | $(k-1)\Delta f$ |
| $Y_k$ | Nyquist frequency | $k\Delta f$ |
| $Y_{k+1}$ | $-(k-1)\Delta f$ | $-k\Delta f$ |
| $Y_{k+2}$ | $-(k-2)\Delta f$ | $-(k-1)\Delta f$ |
| $\vdots$ | $\vdots$ | $\vdots$ |
| $Y_{n-3}$ | $-3\Delta f$ | $-3\Delta f$ |
| $Y_{n-2}$ | $-2\Delta f$ | $-2\Delta f$ |
| $Y_{n-1}$ | $-\Delta f$ | $-\Delta f$ |
The following table illustrates the pattern of the elements of FFT{x} for various lengths of the FFT when shift? is True. Y is FFT{x} and n is the length of the FFT:

| Array element | Corresponding frequency, n even (k = n/2) | Corresponding frequency, n odd (k = (n-1)/2) |
| --- | --- | --- |
| $Y_0$ | $-$(Nyquist frequency) | $-k\Delta f$ |
| $Y_1$ | $-(k-1)\Delta f$ | $-(k-1)\Delta f$ |
| $Y_2$ | $-(k-2)\Delta f$ | $-(k-2)\Delta f$ |
| $Y_3$ | $-(k-3)\Delta f$ | $-(k-3)\Delta f$ |
| $\vdots$ | $\vdots$ | $\vdots$ |
| $Y_{k-2}$ | $-2\Delta f$ | $-2\Delta f$ |
| $Y_{k-1}$ | $-\Delta f$ | $-\Delta f$ |
| $Y_k$ | DC component | DC component |
| $Y_{k+1}$ | $\Delta f$ | $\Delta f$ |
| $Y_{k+2}$ | $2\Delta f$ | $2\Delta f$ |
| $\vdots$ | $\vdots$ | $\vdots$ |
| $Y_{n-3}$ | $(k-3)\Delta f$ | $(k-2)\Delta f$ |
| $Y_{n-2}$ | $(k-2)\Delta f$ | $(k-1)\Delta f$ |
| $Y_{n-1}$ | $(k-1)\Delta f$ | $k\Delta f$ |
How This Input Affects 2D FFT
The illustration below shows the effect of shift? on the 2D FFT result:
[Illustration: 2D input signals; FFT without shift; FFT with shift]
Default: False
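A G Dataflow diagram cannot be reproduced here, but the two orderings have a direct numpy analogue; a small Python sketch, under the assumption that numpy's native FFT layout corresponds to shift? = False:

```python
import numpy as np

n = 8                                           # even length, k = n/2 = 4
x = np.exp(2j * np.pi * 3 * np.arange(n) / n)   # complex tone in bin 3
Y = np.fft.fft(x)

# shift? = False layout: DC at index 0, Nyquist at index k,
# negative frequencies in the upper half of the array.
# shift? = True layout: DC moved to the center (index k).
Y_centered = np.fft.fftshift(Y)

# To invert a centered spectrum, undo the shift before the inverse FFT:
x_back = np.fft.ifft(np.fft.ifftshift(Y_centered))
print(np.allclose(x_back, x))                   # True
```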
## error in
Error conditions that occur before this node runs.
The node responds to this input according to standard error behavior.
Standard Error Behavior
Many nodes provide an error in input and an error out output so that the node can respond to and communicate errors that occur while code is running. The value of error in specifies whether an error occurred before the node runs. Most nodes respond to values of error in in a standard, predictable way.
• If error in does not contain an error, the node begins execution normally. If no error occurs while the node runs, it returns no error; if an error does occur while the node runs, it returns that error information as error out.
• If error in contains an error, the node does not execute. Instead, it returns the error in value as error out.
Default: No error
## x
Inverse complex FFT of the complex valued input sequence.
This output can return a 1D array of complex double-precision, floating-point numbers or a 2D array of complex double-precision, floating-point numbers.
## error out
Error information.
The node produces this output according to standard error behavior.
Standard Error Behavior
Many nodes provide an error in input and an error out output so that the node can respond to and communicate errors that occur while code is running. The value of error in specifies whether an error occurred before the node runs. Most nodes respond to values of error in in a standard, predictable way.
• If error in does not contain an error, the node begins execution normally. If no error occurs while the node runs, it returns no error; if an error does occur while the node runs, it returns that error information as error out.
• If error in contains an error, the node does not execute. Instead, it returns the error in value as error out.
## Algorithm Definition for 1D Inverse FFT
For a 1D, N-sample, frequency domain sequence Y, the inverse discrete Fourier transform (IDFT) is defined as:
$X_n = \frac{1}{N}\sum_{k=0}^{N-1} Y_k\, e^{j2\pi kn/N}$
for n = 0, 1, 2, ..., N-1.
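This sum can be evaluated directly and checked against a library inverse FFT; a short numpy sketch of the definition (not the node's actual implementation):

```python
import numpy as np

def idft(Y):
    """Direct evaluation of X_n = (1/N) * sum_{k=0}^{N-1} Y_k e^{j 2 pi k n / N}."""
    N = len(Y)
    n = np.arange(N)
    k = n.reshape(-1, 1)                      # k runs down the rows
    return (Y @ np.exp(2j * np.pi * k * n / N)) / N

Y = np.random.randn(16) + 1j * np.random.randn(16)
print(np.allclose(idft(Y), np.fft.ifft(Y)))   # True
```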
## Algorithm Definition for 2D Inverse FFT
For a 2D, M-by-N frequency domain array Y, the inverse discrete Fourier transform (IDFT) is defined as:
$X(m,n) = \frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1} Y(u,v)\, e^{j2\pi mu/M}\, e^{j2\pi nv/N}$
for m = 0, 1, ..., M-1 and n = 0, 1, ..., N-1.
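The 2D definition admits the same kind of check; again a numpy sketch rather than the node's implementation:

```python
import numpy as np

def idft2(Y):
    """Direct evaluation of the 2D IDFT definition for an M x N array."""
    M, N = Y.shape
    m = np.arange(M)
    n = np.arange(N)
    Em = np.exp(2j * np.pi * np.outer(m, m) / M)   # e^{j 2 pi m u / M}
    En = np.exp(2j * np.pi * np.outer(n, n) / N)   # e^{j 2 pi n v / N}
    return Em @ Y @ En / (M * N)

Y = np.random.randn(4, 6) + 1j * np.random.randn(4, 6)
print(np.allclose(idft2(Y), np.fft.ifft2(Y)))      # True
```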
Where This Node Can Run:
Desktop OS: Windows
FPGA: Not supported
Web Server: Not supported in VIs that run in a web application |
Data from the CPLEAR experiment, together with the most recent world averages for some of the neutral-kaon parameters, were constrained with the Bell-Steinberger (or unitarity) relation, allowing the T-violation parameter Re(ε) and the CPT-violation parameter Im(δ) of the neutral-kaon mixing matrix to be determined with increased precision. CPLEAR reported results on the CP-, T-, and CPT-symmetries of the neutral-kaon system: for the first time, T-violation was measured by a direct method using semileptonic decays; CP-violation parameters were given for the K0, K̄0 → 2π, 3π decay channels; and the CPT symmetry was tested through the parameter Im(δ) with a precision of 10^-5. Neutral-kaon decays were also analysed to determine the q² dependence of the K0 electroweak form factor f; based on 365612 events, this form factor was found to have a linear dependence on q².

Kaon production in nucleus-nucleus collisions probes dense baryonic matter: the nuclear equation of state and the in-medium properties of strange mesons (Peter Senger, GSI; XI International Conference on Hypernuclear and Strange Particle Physics, October 10-14, Mainz, Germany). Related work investigates the collective motion of kaons in heavy-ion reactions at SIS energies (about 1-2 GeV/nucleon); this criterion is satisfied for subthreshold pion and kaon production, on which we concentrate. Even though the elementary pion and kaon production cross sections can both be described well by three-particle phase-space considerations, the final inclusive production rates differ qualitatively from one another, and the kaon mean field leads to an enhanced kaon yield in heavy-ion collisions at subthreshold energies.

Kaon femtoscopy is an important supplement to that of pions because it allows one to distinguish between different model scenarios working equally well for pions. The femtoscopic invariant radii and correlation strengths were extracted from one-dimensional kaon correlation functions and were compared with those obtained in pp and Pb-Pb collisions at √s = 7 TeV and √s_NN = 2.76 TeV, respectively. In particular, the measured 3D kaon radii were compared with a purely hydrodynamical calculation and with a model where the hydrodynamic phase is followed by a hadronic rescattering stage.

A neutron star is one of several possible endpoints of stellar evolution, and observation of cooling neutron stars can potentially provide information about the states of matter at supernuclear densities (see A. Schmitt, Dense matter in compact stars: a pedagogical introduction, arXiv:1001.3294v2 [astro-ph.SR], Springer, 2010). Proposed equations of state vary widely, from soft to moderate and stiff ones, and produce very different neutron-star structure. In studies of kaon condensation and the equation of state for neutron-star matter (including a modified quark-meson coupling model), the mixed phase of normal baryons and kaon condensation cannot exist in neutron star matter for the FSUGold model and the IU-FSU model; in addition, it is found that when the optical potential of the K⁻ in normal nuclear matter satisfies U_K ≳ -100 MeV, the kaon condensation phase is absent in the inner cores of neutron stars. Compact-star constraints include mass measurements from the innermost stable circular orbit for 4U 1636-536 and the baryon mass-gravitational mass relationships from Pulsar B in J0737-3039.

The Λ(1405) is the lowest-lying odd-parity baryon state: even though it contains a heavy strange quark and has odd parity, its mass is lower than that of any other excited spin-1/2 baryon. Kaon photo- and electroproduction off a proton near the production threshold are investigated by utilizing an isobar model. On the instrumentation side, both the Endcap Disc DIRC (EDD) and the Barrel DIRC were originally designed for pion/kaon separation in the future PANDA detector at FAIR in Germany, and need to be adapted and optimized for the SCTF detector in order to achieve the desired Cherenkov angle resolution of less than 1 mrad for particle momenta around 1 GeV/c. In the first half of 2013 the KLOE detector was upgraded by inserting new detector layers in its inner part; the detector is capable of observing decays into charged and neutral kaon pairs and lighter unflavored mesons (η, f0, a0).

Related measurements and meetings: S. Afanasiev et al. (PHENIX Collaboration), Measurement of Direct Photons in Au+Au Collisions at √s_NN = 200 GeV, Phys. Rev. Lett. 109 (2012) 152302; D. Adamová et al. (CERES Collaboration), Elliptic flow of charged pions, protons and strange particles emitted in Pb+Au collisions at top SPS energy, Nucl. Phys. A; B. Abelev et al. (ALICE Collaboration), Pion, Kaon, and Proton Production in Central Pb-Pb Collisions at √s_NN = 2.76 TeV; STAR Collaboration, Collision energy dependence of moments of net-kaon multiplicity distributions at RHIC; measurement of kaon and antiproton production yields in proton- and pion-carbon interactions with the NA61 experiment at CERN. EXA2017, the 6th edition of the EXA conference series, takes place in Vienna, Austria, from September 11th to 15th, 2017, organized by the Stefan-Meyer-Institute for Subatomic Physics of the Austrian Academy of Sciences; Lattice 2019 is the 37th international conference on lattice field theory, whose aim is to discuss new developments in lattice field theory and its applications in particle physics, nuclear physics and computational physics.
Make 4 interest-free payments of $13. This list is only updated through the 2015 season and does not include any 2016 performances. S. evolution for neutral kaons is chosen. [volume] (Canton, Ohio) 1833-1912, January 10, 1895, Image 7, brought to you by Ohio History Connection, Columbus, OH, and the National Digital Newspaper Program. Sierra bullets, Sights, and Gehmann & Centra Sight Accessories It appears you have javascript disabled. Our Price:$44. Abstract. THE SMART WAY TO BARBECUE. 64, Issue. Peter Blümler: QUANTUM: F-Praktikum-24240: peter. au/4-digit-bbq-s. Weber Store is where barbecue passion and know how, hands-on experience, and exceptional customer service come together. from nature to the science of nature: an overview of the development of croatian physics in the last one hundred years in the framework of history and the philosophy of science, with epistemological perspectives Name: Arbeitsgebiet: Telephon: Email: Dr. 5 Dec 2017 as used in previous CMS measurements of pion, kaon, and Sieber,43 H. 2 comments: Kaon (1) Label: Kendra Steiner Editions (1) The Twilight Saga: Breaking Dawn - Part 2 (2012) cast and crew credits, including actors, actresses, directors, writers and more. Pulled Pork Throwdown! Weber Summit Charcoal Grill vs Slow 'N Sear 27 kettle | How to smoke Center - Duration: 13:43. Kayvon Webster (born February 1, 1991) is an American football cornerback who is currently a free agent. History and Model of the Weber Q BBQ Range. 3692 [nucl-ex] D. It is the second-most widely planted species of millet, and the most grown millet species in Asia. Nació en Tokio, Japón. Noy, M. Pin, C. Customer Support message us Click to Menu Protection Engine/Transmission Guards Fuel Tank Guards Shock Guards Storage Cargo and Pet Barriers Roof Rack Accessories Water Tanks Camping and Touring BBQ Accessories & Tools Fire Pits Other Saws and Knives Tables Engine Other Tools Transmission Coolers More Electrical Maxtrax Sep 09, 2016 · Click on the headline to see what that house down the street sold for. KAON creates accessories that give you the confidence to explore. It’s all free for learners and teachers. 29 Jun 2015 The pion and kaon source radii display a monotonic decrease with increasing average pair transverse mass mT which is consistent with 27 May 2015 Measurement of pion, kaon and proton production in proton–proton D. Companion Gas Cylinder 4kg POL. Join Facebook to connect with Christian John G. " Prior empirical research in the accounting and finance literatures has often neglected the role of the state in corporate governance, preferring instead to study relationships between shareholders and directors and the effects of various corporate governance mechanisms on the economic performance of companies. The background amplitude of the model is constructed from Feynman The mean charged multiplicity as well as its distribution has been measured as a function of c. The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity (-0. Weber and others you may know. CERES Publications For proposals and other SPSC documents see below. Email To access Person ’s phone and email just run the installation wizard and Kaon. Companion Gas Cylinder 9kg POL. This is what is known as “lepton universality”, which not only predicts that electrons and muons behave the same, but should be Bioware and the team behind Star Wars: The Old Republic have announced that a new content patch will be released on January 17th. 
Concept, design and prototyping of a highly integrated Tb/s electro-optical data transmission path for future particle detectors – Weber (Schneider), Leuthold, Becker Production and commissioning of the pixel detector for the upgrade of the CMS experiment at LHC – Weber (Caselle), Husemann, Müller Radial flow of kaon mesons in heavy ion reactions Article (PDF Available) in Physical Review C 57(6) · May 1998 with 35 Reads How we measure 'reads' arXiv:nucl-th/9410017v1 7 Oct 1994 Kaon flow as a probe of the kaon potential in nuclear medium G. He added three legs to the bottom, a handle to the top, and the rest is history. ), is an annual grass grown for human food. Make 4 interest-free payments of $28. 5 tools in 1 including the most important one - a bottle opener. 8. This listing of home and condominium sales is compiled from the Norfolk and Plymouth county registries of deeds by Warren Apr 10, 2008 · Using strangeness tagging at production time, CPLEAR measures K 0 / K ̄ 0 time-dependent asymmetries in pionic and semileptonic kaon decays. Called "Rise of the Rakghouls", the update will feature a new The (1405) •The (1405) is the lowest-lying odd-parity state of the baryon. P. bluemler: Univ. 40c (C) 0. 109 (2012) 252301. Bargassa, The expanded diversity of methylophilaceae from Lake Washington through cultivation and genomic sequencing of novel ecotypes. It’s the place to go to find the complete range of Weber barbecues and accessories in an environment unlike anything you have seen before. A894 (2012) 41-73, arXiv:1205. physletb. ; Weber, S. Using a convenient formalism also used by Ghirardi, Grassi und Weber [17], one can describe the complete time evolution energy physics; in particular, the neutral kaon pairs as produced at DAΦNE,. Bloch, P. A radial collective flow of K + mesons is predicted to exist Apr 19, 2017 · The LHCb collaboration team. Top Companies www. High Temperature Superconductivity 392-396 (1990). By A. ; Watanabe, Y. Physicists at Boston University look to open a window on the universe. Facebook gives people the power to share and makes the Akincano Marc Weber was born in Berne (Switzerland) and is a Buddhist teacher and contemplative psychotherapist. 02 TeV using the ALICE detector at the LHC. Flush Flat Maxtrax Mounting Bracket Plate for Rhino Platform Racks with PINS. 2. Wessels54, The Weber range of BBQ tools are designed to take your barbecue cooking experience to the next level. 0 TIMES Celebex is the definitive rating index of Bollywood stars. Mosel, and K. We carry a selection of bbq accessories to suit Weber Q BBQs including side tables, cleaning tools & convection trays perfect for your outback travels. The Stark County Democrat. Ken has 3 jobs listed on their profile. Mobile Suit Gundam (original Universal Century timeline): The Principality of Zeon launches a war with their mobile suits (at the time, a new technology). The KLOE-2 experiment is located at the collider interaction region (IR). Be it in towns, parks, the open countryside or even your own back garden wild plants really do get everywhere- I've found Morels, a highly sought after spring mushroom growing in the mulch at my local Tesco. [2] G. energy in the reaction e <SUP>+</SUP> e <SUP>-</SUP>→hadrons. Educado en una familia con extensa tradición industrial, Ishikawa se licenció en Químicas por la Universidad de Tokio en 1939. Our Price:$169. A. Weiler,43. Wayand,43 M. ch. Christian John G. (See Appendix 6. ***WEBER ALUMINIUM TRAY AND TRIVET NOT INCLUDED ***. 
Sebastian Böser: PRISMA/ETAP: Astroparticle The effect of the hyperon softening of the equation of state (EOS) of dense matter on the spin evolution of isolated neutron stars is studied for a broad set of hyperonic EOSs. Explore Wealth Management. 267, p Offers from 07. Building bridges between physics and philosophy, I shall argue that such a claim features some puzzling assumptions that are frequently overlooked in the literature. 04. Sep 24, 2018 · View Ken Weber’s profile on LinkedIn, the world's largest professional community. Makes cleaning your grill quick and easy. Coleman Accessory Hyperflame Grill Grate. Antiferromagnetic paramagons in the cuprate oxide superconductors: A novel Fermi liquid. We review physical properties important for cooling such as neutrino emission processes and superfluidity in the stellar interior, surface envelopes of light elements owing to accretion of matter, and strong surface magnetic fields. cern. Ko, and Bao-An Li Cyclotron InstituteandPhysicsDepartment, TexasA&MUniversity,CollegeStation, Texas77843 Abstract Theflow of kaons, i. 066 Gundam: . Steggemann, D. 00 fortnightly and receive your order now. Alpert, and G. Một sao neutron được hình thành từ những gì còn lại của vụ sụp đổ một ngôi sao lớn sau các vụ nổ siêu tân tinh Kiểu II hay Kiểu Ib hay Kiểu Ic. LinkedIn is the world's largest business network, helping professionals like Karon Weber discover inside connections to recommended job The man purchased his airline ticket using the alias Dan Cooper but, because of a news miscommunication, became known in popular lore as D. At the start of the first series, the Federation has just produced the RX-78 Gundam, a Super Prototype Humongous Mecha with the armor and weaponry roughly equivalent to that of a battleship. The energy dependence of the dynamical net-charge fluctuations calculated from the HRG model is compared with STAR [] (solid symbols) and NA49 [] (open triangles) measurements for the charge ratios (a), (b), and (c). Approximately 11. 'T Score' is calculated by measuring 11 parameters, ranging from box office performance to PR buzz to Albuquerque, Ivone F. The observers have a constant Biografía de Kaoru Ishikawa Kaoru Ishikawa (13 de julio de 1915 – 16 de abril de 1989), químico y empresario japonés. target, producing kaon and pion mesons that decay to neutrinos. Both these particles later decay The EXA2020 is the 7th edition of the EXA conference series, and will take place in Vienna, Austria, from September 14th to 18th, 2020. web. Specialized surgeons at First State Orthopaedics in Delaware use surgical and nonsurgical treatments for injuries to foot, ankle, shoulder, hand, elbow, hip, knee and spine. 2 cm. Westfall Physics Letters B 785 (2018) p. Will now also fit Q100 and Q100e with no modifications. Join Facebook to connect with Troy Cooper and others you may know. Our Price: \$299. kaon weber
ms1gqqnqf, 3jp9yqkgwww, gqghu9ly9l, 4j1s7yknm, o8oomoax, wu99bw5cmip, qvek9fbc, pw7rbhj8zv, 4gabtu4yfritpf, rrwlmicacz, iuh1d75a7x, wwrraxrad, fxgrobjxmy, rg3ulzd8xuvhxu, llmkd8uugoxp, 1ubhrivvoy7fw, 0umzmdoyrt, 5gwwlwzkipjaeq7, 2ndp9uhu, x6i9943vkyf, acr2flpfv, lun0vhgt, 7dbhoj7gq, mzvrctt, tpcfgcqws0bj, bvrvad48, hcllzkebg, jmoahtbam, le2kqxdnxry, 5ulfgo6, akeubjdjj, |
# Homology class calculation
Let $SO(3)$ be the group of $3\times 3$ special orthogonal matrices (the rotation group).
Define a map $i\colon SO(3)\to SO(3)\times S^2$ by $A\mapsto (A,A^{-1} e_1)$, where $e_1$ is the unit vector $(1,0,0)\in S^2$.
Then, $i$ induces $i_*\colon H_3(SO(3))\to H_3(SO(3)\times S^2)=H_3(SO(3))\oplus H_1(SO(3))$. (The last equality follows from the Kunneth formula.)
What is the image of the generator $[SO(3)]\in H_3(SO(3))$ via $i_*$?
I think the answer is $(1,1)\in H_3(SO(3))\oplus H_1(SO(3))$.
Is this true?
What is your strategy to prove that the second map $H_3(SO(3)) \to H_1(SO(3))$ is the reduction mod 2 map? – mland Jan 7 '13 at 10:15
I don't have... – user55417 Jan 7 '13 at 17:35
I think the image is actually $(1,0)$. The first coordinate is easy: considering the composition $SO(3)\rightarrow SO(3)\times S^2\rightarrow SO(3)$, where the second map is the natural projection $\pi$, we see that $\pi_\ast i_\ast = \mathrm{Id}_\ast$ on $H_3(SO(3)) \cong \mathbb{Z}$. Since $\pi_\ast$ is an isomorphism when restricted to $$H_3(SO(3))\oplus 0\subseteq H_3(SO(3))\oplus H_1(SO(3))\otimes H_2(S^2),$$ this implies that the first coordinate is a $1$.
The second coordinate $0$ takes some more work. The starting point is to use naturality of the reduction $$H_3(SO(3);\mathbb{Z})\otimes \mathbb{Z}/2 \rightarrow H_3(SO(3); \mathbb{Z}/2)$$ to see that $$i_\ast([SO(3)]) = (1,0)\in H_3(SO(3)\times S^2)$$ iff $$i_\ast [SO(3)] = (\overline{1},\overline{0}) \in H_3(SO(3)\times S^2; \mathbb{Z}/2)\cong \mathbb{Z}/2\oplus\mathbb{Z}/2. \, \, \, (\ast\ast)$$
To compute the second $i_\ast$ map, first consider the map $\phi:SO(3)\times SO(3) \rightarrow SO(3)\times S^2$ given by $\phi(A,B) = (A,B^{-1} e_1)$. The point is that $i = \phi\circ \Delta$ where $\Delta:SO(3)\rightarrow SO(3)\times SO(3)$ is the diagonal embedding. So, we can compute $i_\ast$ as $\phi_\ast \Delta_\ast$.
The map $\Delta_\ast$ is easy - it sends $[SO(3)]$ to $[SO(3)\otimes 1] + [1\otimes SO(3)]$, so we need only understand $\phi_\ast$ on $[SO(3)\otimes 1]$ and $[1\otimes SO(3)]$. But since $\phi$ is simply a product of maps, we see $\phi_\ast[SO(3)\otimes 1] = [SO(3)\otimes 1]$ and $\phi_\ast [1\otimes SO(3)] = 0$ since $H_3(S^2;\mathbb{Z}/2) = 0$. |
FutureStarr
21 35 As a Percentage
## 21 35 As a Percentage
### Fraction
I've seen a lot of students get confused whenever a question comes up about converting a fraction to a percentage, but if you follow the steps laid out here it should be simple. That said, you may still need a calculator for more complicated fractions (and you can always use our calculator in the form below).
Step 1: we simplify the fraction by dividing both the numerator and the denominator by $$7$$: $$\frac{21}{35} = \frac{3}{5}$$. Step 2: we write $$\frac{3}{5}$$ as an equivalent fraction over $$100$$. Using the fact that $$100 = 5\times 20$$, we multiply both the numerator and the denominator by $$20$$ to obtain our fraction: $\frac{3}{5} = \frac{3\times 20}{5\times 20} = \frac{60}{100}$ Finally, since $$\frac{60}{100} = 60\%$$ we can state that $$3$$ is $$60\%$$ of $$5$$, and hence that $$\frac{21}{35} = 60\%$$ as well. (Source: www.radfordmathematics.com)
### Percentage
Before we get started in the fraction to percentage conversion, let's go over some very quick fraction basics. Remember that a numerator is the number above the fraction line, and the denominator is the number below the fraction line. We'll use this later in the tutorial. (Source: visualfractions.com)
### Calculate
To calculate percentages, start by writing the number you want to turn into a percentage over the total value so you end up with a fraction. Then, turn the fraction into a decimal by dividing the top number by the bottom number. Finally, multiply the decimal by 100 to find the percentage.
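Those three steps are easy to script as a sanity check. Here is a minimal sketch in Julia (the helper name is ours, not from the text):

```julia
# Fraction -> decimal -> percentage, following the three steps above.
to_percentage(numerator, denominator) = 100 * numerator / denominator

to_percentage(21, 35)   # 60.0, i.e. 21/35 = 60%
to_percentage(3, 5)     # 60.0, the simplified fraction gives the same result
```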
That gives you the annual interest, but you are going to pay it in monthly instalments. That means that each year’s payment has to be divided by 12 (in practice, your mortgage company will probably do it by day, so that it will alter slightly each month, but this should be close enough for budgeting purposes). (Source: www.skillsyouneed.com)
# Measures of overlap
EcologicalNetworks.overlap - Function
overlap(N::T; dims::Union{Nothing,Integer}=nothing) where {T <: BipartiteNetwork}
Returns the overlap graph for a bipartite network. The dims keyword argument can be 1 (default; overlap between top-level species) or 2 (overlap between bottom-level species). See the documentation for ?overlap for the output format.
source
overlap(N::T; dims::Union{Nothing,Integer}=nothing) where {T <: UnipartiteNetwork}
Returns the overlap graph for a unipartite network. The dims keyword argument can be 1 (overlap based on preys) or 2 (overlap based on predators), or nothing (default; overlap based on both predators and preys). The overlap is returned as a vector of named tuples, with elements pair (a tuple of species names) and overlap (the number of shared interactors). The ordering within the pair of species is unimportant, since overlap graphs are symmetrical.
source
EcologicalNetworks.AJS - Function
AJS(N::T; dims::Union{Nothing,Integer}=nothing) where {T <: UnipartiteNetwork}
Additive Jaccard Similarity for pairs of species in the network. AJS varies from 0 (no common species) to 1 (identical profiles). This function can be used to measure AJS based on only successors or predecessors, using the dims argument.
Note that this function uses all direct preys and predators to measure the similarity (and so does not go beyond the immediate neighbors).
References
Gao, P., Kupfer, J.A., 2015. Uncovering food web structure using a novel trophic similarity measure. Ecological Informatics 30, 110–118. https://doi.org/10.1016/j.ecoinf.2015.09.013
source
EcologicalNetworks.EAJS - Function
EAJS(N::T; dims::Union{Nothing,Integer}=nothing) where {T <: UnipartiteNetwork}
Extended Additive Jaccard Similarity for pairs of species in the network. EAJS varies from 0 (no common species) to 1 (identical profiles). This function can be used to measure EAJS based on only successors or predecessors, using the dims argument.
Note that this function counts all interactions up to a distance of 50 to define the neighbourhood of a species. This should be more than sufficient for most ecological networks.
References
Gao, P., Kupfer, J.A., 2015. Uncovering food web structure using a novel trophic similarity measure. Ecological Informatics 30, 110–118. https://doi.org/10.1016/j.ecoinf.2015.09.013
source |
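A minimal usage sketch follows. The toy adjacency matrix is made up, and we assume a `UnipartiteNetwork(::Matrix{Bool})` constructor is available in your version of the package; only the `overlap` and `AJS` calls documented above are used:

```julia
using EcologicalNetworks

# Toy 3-species food web: entry (i, j) is true when species i interacts with j.
A = Bool[0 1 1;
         0 0 1;
         0 0 0]
N = UnipartiteNetwork(A)

overlap(N; dims=1)  # vector of (pair = ..., overlap = ...) named tuples: shared preys
AJS(N)              # additive Jaccard similarity over both predecessors and successors
```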
Calculating the following integrals
• April 19th 2010, 07:58 PM
thedoctor818
Calculating the following integrals
Hullo,
I need some help calculating the following integrals:
$\lim_{x\to 0}\; \dfrac{1}{x} \int_0^x\; e^{t^2}\; dt
$
and this one:
$\lim_{h\to 0}\; \dfrac{1}{h} \int_3^{3+h}\; e^{t^2}\; dt
$
using the FTC.
Any help would be appreciated.
-the Doctor
• April 19th 2010, 08:33 PM
southprkfan1
Quote:
Originally Posted by thedoctor818
Hullo,
I need some help calculating the following integrals:
$\lim_{x\to 0}\; \dfrac{1}{x} \int_0^x\; e^{t^2}\; dt
$
and this one:
$\lim_{h\to 0}\; \dfrac{1}{h} \int_3^{3+h}\; e^{t^2}\; dt
$
using the FTC.
Any help would be appreciated.
-the Doctor
Both these problems have very similar solutions.
Here's a way to start
Let $F(x) = \int_0^{x}\; e^{t^2}\; dt$
Notice that (for part 2) $\frac{F(x+h) - F(x)}{h} = \frac{\int_0^{x+h}\; e^{t^2}\; dt - \int_0^{x}\; e^{t^2}\; dt}{h} = \frac{1}{h}\int_x^{x+h}\; e^{t^2}\; dt$...
• April 19th 2010, 08:56 PM
thedoctor818
I am sorry, but I am still a bit confused. So, for the first integral, am I just taking $\lim_{x\to0}\dfrac{F(x)}{x}$? If so, how do I evaluate it, since there is no 'elementary' antiderivative? And in the second, am I taking $\lim_{h\to0}\dfrac{F(x+h)-F(x)}{h}$?
Sorry, but I am still somewhat confused.
-Michael
• April 20th 2010, 07:44 AM
southprkfan1
Quote:
Originally Posted by thedoctor818
And in the second, am I taking $\lim_{h\to0}\dfrac{F(x+h)-F(x)}{h}$?
Yes, that should look like something familiar.
Quote:
So, for the first integral, am I just taking $\lim_{x\to0}\dfrac{F(x)}{x}$?
Yes, but note that F(0) = 0, so we can rewrite it as:
$\dfrac{F(x)}{x} = \dfrac{F(x)-F(0)}{x-0}$
• April 20th 2010, 11:14 AM
thedoctor818
Thanks a bunch - that really helped out.
-Michael |
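For reference, here is how the hints above wrap up. Writing $F(x) = \int_0^x\; e^{t^2}\; dt$, the Fundamental Theorem of Calculus gives $F'(x) = e^{x^2}$, so both limits are just derivatives in disguise:

$\lim_{x\to 0}\; \dfrac{1}{x} \int_0^x\; e^{t^2}\; dt = \lim_{x\to 0}\dfrac{F(x)-F(0)}{x-0} = F'(0) = e^{0} = 1$

$\lim_{h\to 0}\; \dfrac{1}{h} \int_3^{3+h}\; e^{t^2}\; dt = \lim_{h\to 0}\dfrac{F(3+h)-F(3)}{h} = F'(3) = e^{9}$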
### Identity Based Online/Offline Signcryption Scheme
S. Sharmila Deva Selvi, S. Sree Vivek, and C. Pandu Rangan
##### Abstract
Online/Offline signcryption is a cryptographic primitive where the signcryption process is divided into two phases: an online phase and an offline phase. Most of the computations are carried out offline (where the message and the receiver identity are unavailable). The online phase does not require any heavy computations like pairing or multiplication on elliptic curves, and is very efficient. To the best of our knowledge there exist three online/offline signcryption schemes in the literature; we propose various attacks on all the existing schemes. Then, we give the first efficient and provably secure identity based online/offline signcryption scheme. We formally prove the security of the new scheme in the random oracle model \cite{BellareR93}. The main advantage of the new scheme is that it does not require knowledge of the message or the receiver during the offline phase. This property is very useful since it is not required to pre-compute offline signcryptions for different anticipated receivers during the offline phase. Hence, any value generated during the offline phase can be used during the online phase to signcrypt a message to any receiver. This helps in reducing the number of values stored during the offline phase. To the best of our knowledge, the scheme in this paper is the first provably secure scheme with this property.
Available format(s)
Category
Public-key cryptography
Publication info
Published elsewhere. Unknown where it was published
Keywords
Online Offline SigncryptionIdentity Based CryptographyConfidentialityUnforgeabilityRandom Oracle Model
Contact author(s)
sharmioshin @ gmail com
ssreevivek @ gmail com
History
Short URL
https://ia.cr/2010/376
CC BY
BibTeX
@misc{cryptoeprint:2010/376,
author = {S. Sharmila Deva Selvi and S. Sree Vivek and C. Pandu Rangan},
title = {Identity Based Online/Offline Signcryption Scheme},
howpublished = {Cryptology ePrint Archive, Paper 2010/376},
year = {2010},
note = {\url{https://eprint.iacr.org/2010/376}},
url = {https://eprint.iacr.org/2010/376}
}
One satellite, two planets and movement
1. Aug 5, 2013
Rapidrain
I am trying to write a program to show the flight of a satellite in the neighbourhood of two large planets. In all of this the mass of the satellite is negligible.
I have the potential energy from planet1 = pe1 and
the potential energy from planet2 = pe2 and
the kinetic energy of the satellite = ke
Using the sum of the two planets' acc vectors to create a !! single !! acc vector I can calculate the next position using the current position, the velocity vector and the movement caused by the !! single !! acc vector.
This is good; (it works fine in a single planet and satellite model).
The new velocity vector can also be similarly deduced adding the induced velocity from the acc vector to the original velocity vector.
This is also good; (it also works fine in a single planet and satellite model).
However, total energy is just a bit off. Using my model with a short sliver of time, I see a fractional decrease in total energy of about $6.5 \times 10^{-4}$. Not a really big number, but I want to find out how I can reduce it to 0.0.
I have three possibilities of tweaking the model to reach change in TE = 0.0 :
1. only increase the velocity and thereby the kinetic energy
2. only increase the distance from the two planets and thereby the potential energy
3. increase both vel and dist (in a certain proportion) to increase both KE and PE
Does physics, nature, mathematics or logic define which of these three paths to explore?
2. Aug 5, 2013
voko
This is known as Euler's three body problem. I suggest you loop that up and think whether you really need to do what you are doing.
3. Aug 5, 2013
Rapidrain
Sorry voko, but I don't understand what you mean by "loop that up".
And really need to do what I am doing? Please explain.
4. Aug 5, 2013
voko
Find the information on Euler's three body problem. Wikipedia has a page on that. If English is not your native language, you may want to search for the information in your language.
5. Aug 5, 2013
Rapidrain
Again Voko, what do you mean by 'loop that up'? Is this the designation of how one solves Euler's three bodies?
6. Aug 5, 2013
voko
"Look that up" = "find that information". Do not re-invent the wheel.
7. Aug 5, 2013
D H
Staff Emeritus
Also known as "the problem of two fixed centers".
That, however, is not the cause Rapidrain's problem. The issue is how position and velocity are being updated. What follows is a very brief tutorial in numerical techniques to solve an ordinary differential equation (ODE).
First off, Rapidrain, you are trying to solve what's called a second order initial value problem. Second order means you have first (velocity) and second (acceleration) derivatives, initial value means you know the position and velocity at the start time and want to find them at some end time.
First order ODE techniques
A large number of techniques for solving first order initial value problems exist. You can take advantage of these by converting this second order ODE to a first order ODE. Any second order ODE can be re-expressed as a first order ODE by creating a doubled-up state vector that comprises the zeroth and first derivatives. For example, $\dot x(t) = v(t), \ddot x(t) = a(t)$ becomes $u(t) = (x(t), v(t)), \dot u(t) = (v(t), a(t))$.
The simplest first order ODE solver is Euler's method: $u(t+\Delta t) = u(t) + \Delta t\, \dot u(t)$. You should never use Euler's method. However, it is important to understand how it works because almost every other integration technique can be viewed as making smarter Euler-type steps.
For a second order ODE, Euler's method becomes
\begin{aligned} \vec x(t+\Delta t) &= \vec x(t) + \Delta t \, \vec v(t) \\ \vec v(t+\Delta t) &= \vec v(t) + \Delta t \, \vec a(t) \end{aligned}
There are a slew of first order ODE solvers that are far better than Euler's method. Runge-Kutta integrators take a number of intermediate steps between t and t+Δt before arriving at an estimate for u(t+Δt). Predictor/corrector methods keep a history of old values so that it can predict u(t+Δt) using one algorithm and the correct it using another. Google Runge-Kutta, multistep method, and predictor-corrector for more info.
Second order ODE techniques
An alternate approach is to take advantage of the fact that this is a second order problem that you are trying to solve. The equivalent of Euler's method for a second order ODE is to take steps via
\begin{aligned} \vec v(t+\Delta t) &= \vec v(t) + \Delta t \, \vec a(t) \\ \vec x(t+\Delta t) &= \vec x(t) + \Delta t \, \vec v(t+\Delta t) \end{aligned}
This is called the Euler-Cromer method, the symplectic Euler method, plus a whole bunch of other names. The only difference between this approach and the basic Euler method is the order in which position and velocity are updated. Simply switching to updating velocity first makes a *huge* difference. The basic Euler method doesn't even come close to conserving energy. This approach does.
However, Euler-Cromer is still lousy. A simple mod to this approach is to offset the calculation of position and velocity by half a time step. This is what leapfrog, position verlet, and velocity verlet integration do. Google these names for more info. Even more advanced are the Gauss-Jackson techniques.
I'd suggest trying a variant of position verlet. You'll have to bootstrap this by computing the acceleration vector at t=0.
\begin{aligned} \vec x(t+\Delta t/2) &= \vec x(t) + \frac 1 2 \Delta t \, \vec v(t) \\ \vec v(t+\Delta t/2) &= \vec v(t) + \frac 1 2 \Delta t \, \vec a \\ & \text{compute and save midpoint acceleration}\,\vec a = f(\vec x(t+\Delta t/2)) \\ \vec v(t+\Delta t) &= \vec v(t+\Delta t/2) + \frac 1 2 \Delta t \, \vec a \\ \vec x(t+\Delta t) &= \vec x(t+\Delta t/2) + \frac 1 2 \Delta t \, \vec v(t+\Delta t) \end{aligned}
This is no more expensive computationally than Euler-Cromer (the expense is typically in the derivative computations) but it is far more accurate.
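To make this concrete, here is a minimal sketch of the half-step scheme above applied to the two-planets-plus-satellite setup from the original post, in Julia. All values (gravitational parameters, positions, initial conditions, step size) are made-up illustration numbers, not anything from the thread:

```julia
using LinearAlgebra

# Two fixed attracting centers; G*M folded into mu1, mu2 (satellite mass cancels).
mu1, mu2 = 1.0, 0.5
r1, r2 = [-1.0, 0.0], [1.0, 0.0]

accel(x) = -mu1 * (x - r1) / norm(x - r1)^3 - mu2 * (x - r2) / norm(x - r2)^3
energy(x, v) = 0.5 * dot(v, v) - mu1 / norm(x - r1) - mu2 / norm(x - r2)

function step!(x, v, a, dt)
    x .+= 0.5 * dt .* v   # x(t + dt/2)
    v .+= 0.5 * dt .* a   # v(t + dt/2), using the saved acceleration
    a .= accel(x)         # midpoint acceleration, kept for the next step
    v .+= 0.5 * dt .* a   # v(t + dt)
    x .+= 0.5 * dt .* v   # x(t + dt)
    return nothing
end

x, v, dt = [0.0, 0.7], [1.1, 0.0], 1.0e-4
a = accel(x)              # bootstrap: acceleration at t = 0
E0 = energy(x, v)
for _ in 1:100_000
    step!(x, v, a, dt)
end
abs(energy(x, v) - E0) / abs(E0)   # relative energy drift; stays small
```

Unlike nudging velocities or positions by hand to force the energy budget to balance, a symplectic-style stepper like this keeps the drift bounded by construction.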
8. Aug 5, 2013
voko
As you most certainly know, solving ODEs might be wholly unnecessary in this problem. Which would eliminate the problem entirely. That is the whole point behind my urging Rapidrain to study the classical approach.
9. Aug 5, 2013
Rapidrain
Very good DH. This helps much more than "go look it up".
Question though : your equations show : x(t + del*t) = x(t) + del*t*v(t)
shouldn't the right side also have the distance covered by acceleration :
x(t) + del*t*v(t) + (1/2)*acc(t)*(del*t)**2 ??
I'll give your algorithm a try. |
# Limit of derivative function
Let $f:\Bbb{R}\to\Bbb{R}$ be continuous and defined by $xf(x) = e^x-1$. What is $$\lim _{n\to \infty }nf^{\left(n\right)}\left(x\right)\,?$$ I've tried to calculate $f'(x)$, $f''(x)$ and $f'''(x)$, but I didn't find any pattern.
• How familiar are you with Taylor series? Jan 2, 2017 at 17:57
• @Arthur Not at all... Haven't learned about them until now. If you could give me a hint using another method it would be appreciated. Jan 2, 2017 at 18:00
• Is the fact that, for $n \geq 0$, $x f^{(n+1)} = e^x - (n+1) f^{(n)}$ of any help here? Jan 2, 2017 at 18:57
• @Dmoreno It allows you to identify the limit, if you know it exists. If $nf^{(n)}$ converges to some finite value for each $x$ then, obviously, $f^{(n)}$ converges to $0$. So from the formula it follows that the limit is $e^x$, if it exists. Jan 2, 2017 at 19:46
• @Liviu if you now that for fixed $x$, $nf^{(n)}(x)$ converges to some finite value $a$, then $f^{(n)}(x)$ behaves like $a/n$ which converges to $0$. So will $xf^{(n)}(x)$ (for the given $x$), and the limit of $nf^{(n)}(x)$ will coincide with the limit of $(n+1)f^{(n)}(x)$. Now use these results in the formula from Dmorenos comment. The missing link is the proof for the existence of the limit. Jan 2, 2017 at 20:14
Though you may not be very familiar with Taylor's theorem, I intend to post this answer so that you may see how powerful it is. Notice that
$$e^x=\sum_{k=0}^\infty\frac{x^k}{k!}$$
And thus,
$$f(x)=\sum_{k=0}^\infty\frac{x^k}{(k+1)(k!)}$$
$$f^{(n)}(x)=\sum_{k=0}^\infty\frac{x^k}{(k+1+n)(k!)}$$
$$nf^{(n)}(x)=\sum_{k=0}^\infty\frac{nx^k}{(k+1+n)(k!)}$$
And as $n\to\infty$, we end up with (quite nicely, if we take the limit through the sum)
$$\lim_{n\to\infty}nf^{(n)}(x)=\sum_{k=0}^\infty\frac{x^k}{k!}=e^x$$
A quick check says this should be right.
To take the limit through the sum, you should use uniform convergence, which follows through quite easily.
• In fact, it is easy to show that the limit is $e^x$ without appealing to uniform convergence or the dominated convergence theorem. Jan 2, 2017 at 19:28
• @Dr.MV Actually, it was not as simple as I may have liked, so I think I'll leave it this way. Jan 2, 2017 at 19:43
• Let $\epsilon >0$ be given. Choose $N$ large enough so that $\sum_{k=N}^\infty \frac{k+1}{n+k+1}\frac{x^k}{k!}<\epsilon/2$. Then, with that fixed $N$, take $n$ so large that the finite sum $\sum_{k=0}^ {N-1} \frac{k+1}{k+1+n}\frac{x^k}{k!}<\epsilon/2$. Jan 2, 2017 at 19:50 |
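For the record, here is the elementary estimate alluded to in the comments, written out. Since $\frac{n}{k+1+n} = 1 - \frac{k+1}{k+1+n}$, $$\left|e^x - n f^{(n)}(x)\right| = \left|\sum_{k=0}^\infty \frac{k+1}{k+1+n}\cdot\frac{x^k}{k!}\right| \le \frac{1}{n}\sum_{k=0}^\infty \frac{(k+1)\,|x|^k}{k!} = \frac{(1+|x|)\,e^{|x|}}{n}\to 0,$$ which gives the limit $e^x$ pointwise with no appeal to uniform convergence.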
# Camber / Cross-Slope:
• It is the slope provided to the road surface in the transverse direction to drain rainwater from the road surface and to avoid the following:
1. Stripping of bitumen from the aggregate in the presence of water.
2. Swelling of the subgrade in case water seeps into it.
3. Slipping of vehicles over the wet pavement.
4. Glare over the wet pavement surface.
• On a straight road it is provided by raising the centre of the carriageway with respect to the edges, forming the crown at the highest point along the centre line.
• At horizontal curves with superelevation, surface drainage is provided by raising the outer edge.
• It is represented in any of the following ways:
1. As a percentage: for example, camber = 5%, i.e. $$\tan\theta = \frac{5}{100}$$.
2. As a fraction: for example, camber = 1 in 20 = $$\frac{1}{20}$$ = 0.05.
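A quick worked example (width and slope values assumed for illustration): for a two-lane carriageway of width $$W = 7.0$$ m with a camber of 1 in 50, the rise of the crown above the edges is $$\frac{W}{2}\times\frac{1}{50} = \frac{7.0}{2}\times 0.02 = 0.07 \text{ m} = 7 \text{ cm}$$.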
• The value of cross-section depend on following factor:
1. Type of pavement surface
2. Amount of rainfall
## Type of Cross-Slope / Camber
1. Straight line camber
2. Parabolic camber
3. Composite camber
Note: For cement concrete pavements, straight line camber is preferred, as it is easier to lay.
# Width of Pavement or Carriageway:
• Width of pavement depends upon the width of a traffic lane and the number of lanes.
• The width of a traffic lane is decided on the basis of the type of vehicles moving on it, along with some clearance on both sides.
• The passenger car is considered the standard vehicle for deciding the width of the carriageway (width of a passenger car = 2.44 m, taken as roughly 2.5 m).
• For rural highways, if the pavement has two or more lanes, the width of a single lane is 3.5 m.
• The number of lanes to be provided depends upon the traffic volume.
• The width of carriageway to be provided varies with these conditions.
1. ## algebra
4. Mr Benny is now 3 times as old as his son. In 13 years time, he will be twice as old as his son. Find Mr Benny's present age.
5. The sum of the present ages of Mrs Peel and her daughter Emma is 50 years. In 5 years' time Mrs Peel will be 3 times as old as Emma. How old is Emma now?
2. Originally Posted by alberta
4. Mr Benny is now 3 times as old as his son. In 13 years time, he will be twice as old as his son. Find Mr Benny's present age.
Let the son's age be x. Then Mr. Benny's age is currently 3x. In 13 years, the son's age is x + 13 and Mr. Benny's age is 3x + 13. The problem says that Mr. Benny's age in 13 years is twice his son's age in 13 years, so:
3x + 13 = 2(x + 13)
And now solve for x. (I got x = 13, so Mr. Benny's present age is 3*13 = 39.)
-Dan
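For completeness, finishing the algebra from Dan's setup: $3x + 13 = 2(x + 13)$ expands to $3x + 13 = 2x + 26$, so $x = 13$ and Mr. Benny's present age is $3x = 39$, matching the note above.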
3. Originally Posted by alberta
5. the sum of the present ages of Mrs Peel and her daughter Emma is 50 years. In 5 years tiME Mrs Peel will be 3 times as old as Emma. How old is Emma now?
Try this the same way as I set up the last one. Call Emma's age x. What is Mrs Peel's age right now? What is Emma's age in 5 years? What is Mrs Peel's age in 5 years? Use this to set up the second sentence as an equation. If you need more help, just let us know. (Note: I got x = 10.)
-Dan
4. Hello, Alberta!
For years, I used a chart to organize the information.
I'll do the first one . . .
4. Mr. Benny is now 3 times as old as his son.
In 13 years time, he will be twice as old as his son.
Find Mr. Benny's present age.
In the chart, make a row for each person.
Code:
- - - - - + - - - - - - - -
Mr. Benny |
- - - - - + - - - - - - - -
son |
- - - - - + - - - - - - - -
Make a column for each time period: "Now" and some other time.
Code:
| Now | +13 yrs |
- - - - - + - - + - - - - +
Mr. Benny | | |
- - - - - + - - + - - - - +
son | | |
- - - - - + - - + - - - - +
Mr. Benny is 3 times his son's age.
. . Let $x$ = son's age (now).
. . Let $3x$ = Mr. Benny's age (now)
Write those in the "Now" column.
Code:
| Now | +13 yrs |
- - - - - + - - + - - - - +
Mr. Benny | 3x | |
- - - - - + - - + - - - - +
son | x | |
- - - - - + - - + - - - - +
13 years in the future, both will be 13 years older.
. . Mr. Benny will be $3x + 13$ years old.
. . His son will be $x + 13$ years old.
Write those in the "+13 years" column.
Code:
| Now | +13 yrs |
- - - - - + - - + - - - - +
Mr. Benny | 3x | 3x + 13 |
- - - - - + - - + - - - - +
son | x | x + 13 |
- - - - - + - - + - - - - +
Our equation come from the second column.
In 13 years: . $\underbrace{\text{Mr. Benny}}_\downarrow \:\underbrace{\text{will be}}_\downarrow\: \underbrace{\text{twice}}_\downarrow\:\underbrace{ \text{son's age}}_\downarrow$
. . . . . . . . . . . $3x + 13\quad\;\; = \quad\;\;\: 2 \:\times \;(x + 13)$
And there is our equation!
Solve for $x$, but don't forget:
. . they asked for Mr. Benny's age $(3x)$.
5. Neat idea. I'll have to remember that!
-Dan
6. i neva saw how easy it woz..it took me all day...thanx 2 both of yall
-chart is a good idea- |
# Why is any compact metric space the union of a countable set and a subset which is a perfect space under the induced topology?
Recently I stumbled across the fact that any compact metric space is the union of some countable set and a subset that, when given the induced topology, is a perfect space.
Can anyone provide a proof or reference to a proof of this fact? Thanks.
Note that the set of points with a countable neighborhood is itself both countable and open. Its complement is closed and perfect. – George Lowther Sep 27 '11 at 23:54
A compact metric space is second countable (i.e. has a countable basis): an open cover consisting of all open balls with radius $1/n$ has a finite subcover $\mathcal{B}_n$ by compactness, and one can show that the union of all these $\mathcal{B}_n,n\in\mathbb{N}$ is a countable basis.
Now that we know that a compact metric space is second countable, we can use the following proposition which can be proven using the hint given in the comments by George Lowther.
Every second countable space is a union of a countable set and a perfect set.
Proof: Let $X$ be a second countable space. Pick a countable basis $\mathcal{B}$ of $X$. Denote $$C=\{x\in X:x\text{ has a countable neighbourhood}\}\quad\text{and}\quad P=X\setminus C.$$ Let us show that $C$ is countable. For each $x\in C$ choose a countable neighbourhood $B_x\in\mathcal{B}$ of $x$. For each $B\in\mathcal{B}$ denote $C_B=\{x\in C:B_x=B\}$. Since $x\in B_x$ and $B_x$ is countable for each $x\in C$, each $C_B$ is countable too. Now $C=\bigcup_{B\in\mathcal{B}}C_B$ is countable, being a countable union of countable sets.
Note that any open and countable neighbourhood $U$ of a point $x\in C$ is a subset of $C$, since the same $U$ is a countable neighbourhood of each point of $U$. Since each point of $C$ has such a neighbourhood, $C$ is open and $P$ is closed.
Lastly let us show that $P$ does not have isolated points in itself (i.e. $P$ is dense-in-itself). Pick $x\in P$ and a neighbourhood $U$ of $x$. $U$ is uncountable by definition of $C$, so $C$ being countable, $U\setminus(C\cup\{x\})\subseteq P$ is nonempty and thus $x$ is not an isolated point in $P$.
We have shown that $X=C\cup P$, where $C$ is countable and $P$ is perfect. $\square$
By the way, a similar argument can be used to prove that every space having a basis of cardinality $\kappa$ is a union of a set of cardinality $\kappa$ and a perfect set.
This is of course correct (with mild choice assumptions), but in my opinion the tricky part of this problem is that compact metric spaces are automatically separable (equiv., second countable). This solution glosses over that step. – user83827 Sep 28 '11 at 15:55
@ccc: Yes, somewhere along the way I managed to forget that the question was about compact rather than separable metric spaces. I added a few words indicating how one can show that compact metric implies second countable. – LostInMath Sep 28 '11 at 17:02
Thanks LostInMath! – Pierre R. Sep 28 '11 at 19:10
Every compact metric space is second countable (see, e.g. this proof on PlanetMath). In the book "Basic real analysis" by Sohrab you can find the theorem attributed to Cantor-Bendixon (sic):
Let $(M,d)$ be a second countable metric space and let $F\subset M$ be any closed subset. Then $F=P\cup C$ where $P\subset M$ is perfect and $C\subset M$ is countable.
The theorem thus stated is more general. If you take $(M,d)$ to be compact, then by the first observation, and taking $F=M$ you get the desired result. |
# Boardman Vogt W construction for modules over an operad
The W construction of Boardman and Vogt gives a cofibrant replacement for operads. In http://arxiv.org/abs/math/9907073, Salvatore describes a cofibrant replacement for algebras over an operad. Is there a similar construction which produces cofibrant resolutions of right or left modules over an operad? Does anyone know of a reference for this?
What do you mean by module over an operad? I've heard of modules over an algebra over an operad, and these have a model structure, so they have cofibrant replacement. Every operad is an algebra over a particular coloured operad, so if you take the model structure on modules over that algebra it should answer your question. References: ncatlab.org/nlab/show/…, arxiv.org/abs/math.CT/0701767 – David White May 30 '12 at 3:33
Nevermind, I found a reference which defines modules over an operad. It also kinda answers your question for left modules, since it produces a model structure on left modules over any non-$\Sigma$ operad and on some $\Sigma$-operads. This is John Harper's Homotopy Theory of Modules Over Operads. I'm not sure if it's what you wanted, since it's not very constructive: math.uwo.ca/~jharpe9/ModulesOperadsMonoidal.pdf – David White May 31 '12 at 14:01 |
" /> -->
#### Consumption and Investment Functions Model Question Paper
12th Standard EM
Reg.No. :
Economics
Time : 02:00:00 Hrs
Total Marks : 50
6 x 1 = 6
1. The average propensity to consume is measured by
(a)
C/Y
(b)
CxY
(c)
Y/C
(d)
C+Y
2. If the Keynesian consumption function is C = 10 + 0.8Y and disposable income is Rs. 100, what is the average propensity to consume?
(a)
Rs.0.8
(b)
Rs.800
(c)
Rs.810
(d)
Rs.0.9
3. Lower interest rates are likely to :
(a)
Decrease in consumption
(b)
increase cost of borrowing
(c)
Encourage saving
(d)
increase borrowing and spending
4. The sum of the MPC and MPS is___________
(a)
1
(b)
2
(c)
0.1
(d)
1.1
5. As income increases, consumption will______
(a)
fall
(b)
not change
(c)
fluctuate
(d)
increase
6. According to Keynes, investment is a function of the MEC and _________
(a)
Demand
(b)
Supply
(c)
Income
(d)
Rate of interest
7. Match the following: (5 x 1 = 5)
APS | (1) Average propensity to consume
MPS | (2) S/Y
APC | (3) f(r)
I | (4) ΔS/ΔY
Induced Investment | (5) Profit motive
1 x 2 = 2
18. Assertion (A) : Keynes propounded the fundamental psychological Law of Consumption.
Reason (R) : As income increases, people tend to spend less than the full increment of their income on consumption.
(a) Both A and R are true and R is not the correct explanation of A
(b) Both A and R are true and R is the correct explanation of A
(c) R is true but A is false
(d) A is true but R is false
19. 2 x 2 = 4
20. The Accelerator Principle
(a) β = $\frac{ΔI}{ΔC}$
(b) Ratio between induced investment and an initial change in consumption.
(c) Further developed by Hicks Samuelson and Harrod.
(d) First introduced by - J.M. Keynes
21. Subjective factors of consumption function
(a) The motive of precaution
(b) Income distribution
(c) Foresight
(d) The motive of pride
22. 2 x 2 = 4
23. (a) Accelerator model - J.M. Clark (b) Multiplier - R.F. Kahn (c) MEC - Duesenberry (d) Investment Multiplier - J.M. Keynes
24. (a) Multiplier (K) ΔI/ΔY (b) MPS ΔS/ΔY (c) Accelerator (β) ΔI/ΔC (d) APC C/Y
25. 5 x 2 = 10
26. What is consumption function?
27. Define average propensity to consume (APC).
28. What do you mean by propensity to save?
29. Define Marginal Propensity to Save (MPS).
30. Define Accelerator
31. 3 x 3 = 9
32. State the propositions of Keynes’s Psychological Law of Consumption
33. Explain any three subjective and objective factors influencing the consumption function.
34. Specify the limitations of the multiplier.
35. 2 x 5 = 10
36. Explain Keynes's psychological law of consumption function with a diagram.
37. What are the differences between MEC and MEI. |
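As a worked illustration using the paper's own numbers (multiple-choice questions 2 and 4): with $C = 10 + 0.8Y$ and $Y = 100$, consumption is $C = 10 + 0.8(100) = 90$, so $APC = C/Y = 90/100 = 0.9$; and since $MPC = 0.8$, the identity $MPC + MPS = 1$ gives $MPS = 0.2$.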
# The familiar phenomenon of a rainbow results from the diffraction of sunlight through raindrops. (a) Does the wavelength of light increase or decrease as we proceed outward from the innermost band of the rainbow?
ISBN: 9780134414232 1274
## Solution for problem 6.5 Chapter 6
Chemistry: The Central Science | 14th Edition
Problem 6.5
The familiar phenomenon of a rainbow results from the diffraction of sunlight through raindrops.
(a) Does the wavelength of light increase or decrease as we proceed outward from the innermost band of the rainbow?
(b) Does the frequency of light increase or decrease as we proceed outward? [Section 6.3]
Step-by-Step Solution:
Step 1 of 2) In the primary rainbow, the innermost band is violet and the outermost band is red. (a) Red light has a longer wavelength than violet light, so the wavelength of light increases as we proceed outward from the innermost band. (b) Frequency and wavelength are inversely related ($\nu = c/\lambda$), so the frequency of light decreases as we proceed outward.
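As a rough numerical check (typical visible-light wavelengths assumed): violet light near $\lambda \approx 400\text{ nm}$ has $\nu = c/\lambda \approx (3.00\times 10^{8}\text{ m/s})/(4.00\times 10^{-7}\text{ m}) \approx 7.5\times 10^{14}\text{ Hz}$, while red light near $700\text{ nm}$ has $\nu \approx 4.3\times 10^{14}\text{ Hz}$; frequency falls as wavelength grows toward the outer bands.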
Step 2 of 2
# Is it OK not to think about the Meaning of Life?
If I live a normal life, I work hard, earn $$$ to support my life, never worry, happy-go-lucky, take it easy for everything in my life, eat-drink-sleep happily every day, never think about the meaning of life. Is it OK to be such a human? Or am I just like an animal?

## Answers and Replies

Sunfist
I feel like you are searching for something. I also worry that none of us can tell you what it is. Listen, if that's how you want to live your life, live it that way! It's your life! Don't let someone else tell you what you should be doing with it. You should be able to look inside yourself and decide what you want. If that is the path of the stoic righteous, constantly doing what you can for God, then do that. If it's to work hard, provide for your children, etc., then do that. Maybe you should look into Zen. I think you would like it, if you could find someone to help teach you.

BoulderHead
Originally posted by Saint
If I live a normal life, I work hard, earn $$$ to support my life, never worry, happy-go-lucky, take it easy for everything in my life, eat-drink-sleep happily every day, never think about the meaning of life. Is it OK to be such a human? Or am I just like an animal?
Depends strictly on which political party you vote for.

Zero
I don't ever think about the **MEANING OF LIFE**... but I do think about my life, and what it means to me.

I have to agree with sunfist. The mere fact that you're questioning yourself has meaning. You're looking for him, and when you find him, you'll realize it wasn't him you were seeking at all. It's the question that drives us - you know of what I speak. Ok, so I liked the Matrix. But seriously, if you're asking that question, it means you're not satisfied with the answer you have. You've come to a point in your life, and you know how you got there, but you don't know why. The answer isn't as simple as math or physics - those questions have static answers, and it's the same no matter what. The question you're asking is one you have to answer, because only you can give it meaning. One man's enlightenment is another man's wasted afternoon. For me, I spent many years like a rat in a maze, chasing the cheese. And as anyone knows, the rat never gets the cheese; he only runs to the end, or collapses from the effort. I found that I was running so hard to catch the cheese that I left what was important behind. So I turned around and went back to where I left off before the chase began. Now I'm making my own path instead of following the one that leads to the dead end. I don't know if that makes much sense to you, but that's my analogy, for what it's worth. No one can tell you the answer, because the answer will only have meaning to you. All we can do is show you the door - you have to walk through it.

Sunfist
It is good to have an end to journey toward; but it is the journey that matters, in the end. -Ursula K. Le Guin
Do not seek to follow in the footsteps of men of old; seek what they sought. -Basho
Seek. Just seek. Don't worry if you wind up somewhere other than where you were going.

Zero
And, when in doubt, follow the white rabbit... (Sorry, watching Matrix Reloaded... *grins*)

amadeus
Originally posted by Saint
If I live a normal life, I work hard, earn $$$ to support my life, never worry, happy-go-lucky, take it easy for everything in my life, eat-drink-sleep happily every day, ...
If you find out there's more to it, would you please let us know?
Is it OK to be such a human? Or am I just like an animal?
In what sense is "human" different from "just like an animal"?
There's nothing wrong in living without asking dumb questions. "The meaning of life" is the same as "the meaning of the English language". English has no meaning; the concept of meaning doesn't even apply to a language. But you can use English to construct meaningful sentences. It's the same with life. Life has no meaning in itself, but you create meaning by choosing how to live. And that, like speaking, is ultimately a free and personal task. So the question is not whether you should think about the meaning of life, for that is nonsense. The real question is: are you thinking enough about the meaning of everything you do? Because you may find some of the stuff you do means nothing to you or anyone else.

Saint, while introspection and self-consciousness (which give rise to questioning the "meaning of life") are qualities that are unique to humans, that doesn't mean that a human stops being unique if they don't take full advantage of such qualities. We all have different purposes to our lives. The fact that you may choose not to search for yours doesn't mean that you don't have one; it just means that yours doesn't require searching for. That's as deep as I can get... now to swim back to shallow water.

wuliheron

At least as important as the meaning of life is the meaning of the gift of a question. Sincere questions come from the heart and don't demand answers. They are our spontaneous unconditional gifts to ourselves and the world around us. Why is the sky blue? Where do I come from? All emerging as if out of the mouths of babes....

Curiously enough, as with a child again, the fewer the questions and answers we know, the more spontaneously they emerge. When I was young the birds in the trees seemed to play with me, and I never doubted their joy. The meaning of life was its miracle, I knew, without questions, doubt, or reservation.

What is the meaning of life? What is the meaning of anything? Everywhere you go, there you are.

To see a world in a grain of sand, and heaven in a wild flower
Infinity in the palm of your hand and eternity in an hour
-William Blake

I just put that all out of my mind and then make up a meaning of life... like live, breed, die to make room for more of my species... but I hate this meaning of life, so just to make sure I live happily I am going with live, die.

I agree with Zero that "meaning of life" is meaningless; what counts is "meaning of my life".

Originally posted by Saint:
I agree with Zero that "meaning of life" is meaningless, what counts is "meaning of my life"

I don't necessarily interpret the two as being different. Depends on how you look at the original question from Saint, I guess.

Originally posted by Saint:
If I live a normal life, I work hard, earn $$$ to support my life, never worry, happy-go-lucky, take it easy for everything in my life, eat-drink-sleep happily every day, never think about the meaning of life.
Yes, it is OK. As a matter of fact, I envy the person who can do this! I always think twice before I inflict my questions on unsuspecting happy people. The search for knowledge and meaning is absolute torture at times. It's especially hard for people open to a meaning outside of themselves, which seems to be rare judging from most of the responses here.
Saint,
Is it wrong to feel that way? No. Can we guarantee you'll never question life EVER? Not a chance. I was much like you too. Everyone has a point in their lives where they come to question everything. That time isn't now for you. Just enjoy yourself and don't worry.
Originally posted by wuliheron
At least as important as the meaning of life is the meaning of the gift of a question. Sincere questions come from the heart and don't demand answers. They are our spontaneous unconditional gifts to ourselves and the world around us. Why is the sky blue? Where do I come from? All emerging as if out of the mouths of babes....
Curiously enough, as with a child again, the fewer the questions and answers we know, the more spontaneously they emerge. When I was young the birds in the trees seemed to play with me, and I never doubted their joy. The meaning of life was its miracle, I knew, without questions, doubt, or reservation.
What is the meaning of life? What is the meaning of anything? Everywhere you go, there you are.
Very eloquently put (as always), Wuliheron. |
# What is the domain and range of y=x^4+x^2-2 ?
Mar 31, 2017
Domain: $\left(- \infty , \infty\right)$
Range: $\left[- 2 , \infty\right)$
#### Explanation:
$f \left(x\right) = {x}^{4} + {x}^{2} - 2$
The domain of polynomial equations is $x \in \left(- \infty , \infty\right)$
Since this equation has an even highest degree (4), the lower bound of the range can be found by determining the absolute minimum of the graph. The upper bound is $\infty$.
$f ' \left(x\right) = 4 {x}^{3} + 2 x$
$f ' \left(x\right) = 2 x \left(2 {x}^{2} + 1\right)$
$0 = f ' \left(x\right)$
$0 = 2 x \left(2 {x}^{2} + 1\right)$
$x = 0$ (the only real root, since $2 {x}^{2} + 1 > 0$ for all real $x$)
$f \left(0\right) = - 2$
Range: $\left[- 2 , \infty\right)$
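As a quick check (an editor's addition, not part of the original answer): the second derivative is $f ' ' \left(x\right) = 12 {x}^{2} + 2$, which is positive at $x = 0$, so $x = 0$ is indeed the absolute minimum, and the range starts at $f \left(0\right) = - 2$.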
# Good reference for metric topology
I would like a "good" book (not really introductory, not too advanced, with good theory and exercises) on metric topology covering the following topics:
Metric spaces, open/closed sets, sequences, compactness, completeness, continuous functions and homeomorphisms, connectedness, product spaces, Baire category theorem, completeness of C[0, 1] and Lp spaces, Arzela-Ascoli theorem.
It need not be a full book; parts of books or lecture notes are also most welcome.
Chapter 4: Metric and metrizable spaces.... in "General Topology" , by R. Engelking. You need to browse the Introduction to familiarize yourself with his notation (e.g. $\{x_i\}$ is a sequence), and be aware that for him, compact space means compact Hausdorff.... Chapter 3 is Compact spaces. It includes a beautiful short proof of the Stone-Weierstrass Theorem ( see 3.2.18 thru 3.2.21).... There is a wealth of extra material in the problems and exercises of this book.... But for $C[0,1], L^p,$ and Arzela-Ascoli, etc., I would suggest a text on functional analysis. |
### Home > CCG > Chapter 7 > Lesson 7.2.5 > Problem7-102
7-102.
For each pair of triangles below, determine if the triangles are congruent. If the triangles are congruent,
• complete the correspondence statement,
• state the congruence property,
• and record any other ideas you use that make your conclusion true.
Otherwise, explain why you cannot conclude that the triangles are congruent. Note that the figures are not necessarily drawn to scale.
1. $ΔABC≅\Delta\underline{\qquad}$
1. $ΔSQP ≅ Δ\underline{\qquad}$
1. $ΔPLM ≅ Δ\underline{\qquad}$
• $ΔABC≅\Delta{\underline{\ \ {ADC}\ }}$
$\text{AAS}≅$
($AC = CA$ Reflexive helps)
• $ΔSQP ≅ Δ\underline{\ \ SQR\ }$
$\text{HL} ≅$
($QS = SQ$ Reflexive helps)
• No solution: $∠L≅∠N$ and $∠LMP≅∠NMO$, but only enough information to prove similar by $\text{AA}\sim$.
1. $ΔWXY ≅ Δ\underline{\qquad}$
1. $ΔEDG ≅ Δ\underline{\qquad}$
1. $ΔABC ≅ Δ\underline{\qquad}$
• $ΔWXY ≅ Δ\underline{\ \ TZY\ }$
$\text{SAS}≅$
($WY=TY$ & $XY=ZY$ by definition of midpoint; $∠WYX=∠TYZ$ by vertical angles)
• $ΔEDG ≅ Δ\underline{\ \ GFE\ }$ you figure out why, add extra information.
• $ΔABC ≅ Δ\underline{\ \ DEF\ }$ you figure out why, add extra information. |
### What you need to know about the Higgs seminar
The upcoming Higgs seminar could be the biggest announcement in particle physics for nearly 30 years. There have been several excellent blog posts and videos explaining what the Higgs is and what it does, so I’ll link to those at the bottom of the page. What I want to do here is give you the overview of what you really need to know to get the best from the talk.
Of course you should follow along with the liveblog as well!
### What’s happening with the webcast?
CERN have put in a lot of resources for the webcast. General users can get to the webcast at http://cern.ch/webcast. If you have a CERN login you can use a second webcast at http://cern.ch/webcast/cern_users.
The webcast will start around 09:00 CEST (that's 00:00 US West Coast, 03:00 US East Coast, 08:00 UK, and 17:00 Melbourne).
### What is the Higgs boson? What does it do?
The Higgs boson is part of the Standard Model of particle physics. The Standard Model includes the quarks and leptons (which make up all the matter we see around us) and the photon, gluons, and $$W$$ and $$Z$$ bosons (which carry all the forces in nature, except for gravity). Three of these particles, the $$W^+$$, $$W^-$$ and $$Z$$ bosons, have mass, but according to our framework of physics, they should not have mass unless the Higgs boson exists. The Standard Model of physics predicts that the $$W$$, $$Z$$, photon and Higgs all come as a package and they are all related to each other. If we don’t see a Higgs boson, we don’t understand the world around us.
People say that the Higgs boson gives particles mass, but this isn’t quite what happens. The Higgs boson allows some particles to have mass. The Higgs boson does not explain the mass that comes from binding energies (for example, most of the mass of the proton) and it does not explain the mass associated with dark matter. If the Higgs boson is discovered it will complete the Standard Model of physics, but it will not complete our picture of the universe. There will still be many unanswered questions.
### What would a discovery look like?
In order to claim a discovery, an experiment would need to see a 5 sigma excess over the expected background. A sigma is a measure of uncertainty, and the chance of seeing a 5 sigma excess due to statistical fluctuations is about 1 in 3 million. If both experiments see a 5 sigma excess in the same region, the chance that this is due to a fluctuation is about 1 in 9 million million!
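For the statistically inclined, these odds are easy to reproduce yourself. Here is a back-of-envelope check in R (my own illustration, not part of the seminar material):

pnorm(-5)    # one-sided tail beyond 5 sigma: ~2.9e-07, the "one in a few million" quoted above
pnorm(-5)^2  # both experiments fluctuating together: of order 1 in 10 million million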
The experiments produce “Brazil plots”, which show what they expect to see if there is no Higgs, and compare it to what they actually see. The green band shows 1 sigma deviations, the yellow bands show 2 sigma deviations, and then you have to use your imagination to see the remaining bands, and colors. When the green and yellow bands pass below the SM=1 line, and the central black line does too, then the Higgs is excluded in that region to 95% confidence. If the black line stays above the SM=1 line then we haven’t excluded the Higgs boson in that region yet. So when the green and yellow bands fall far below the SM=1 line, but the black line stays above or at the SM=1 line then we accumulate evidence for a Higgs boson.
### How do we search for the Higgs boson?
The search for the Higgs boson depends on its mass. At high mass it can decay to heavy particles with clean signatures, so the high mass region was the first region to see an exclusion. At very high mass the width of the Higgs boson is large, so the events get spread out over a large range, so the searches take a little longer. At low mass the decays get very messy, so we have to pick our decay modes carefully. The cleanest modes are the two photon mode (often called gamma gamma), the ZZ* mode and the WW* mode. Of these three, the gamma gamma and ZZ* modes are the most sensitive, so we can expect to see these presented tomorrow.
The data are collected at the detectors and stored to disk, and the physicists spend their time analyzing the data. This is a slow process, full of potential pitfalls, so the internal review process is long and stringent. This is one of the reasons why we need two experiments, so that they can check each other’s findings. The experiments at the Tevatron have already presented their results and they see an excess in the same region. This is vital because they are sensitive to different final states, so between the Tevatron and the LHC we have all the analyses covered.
For each analysis there are two kinds of background: the “reducible” backgrounds, where particles fake the particles we are looking for (for example, a high energy electron can look just like a high energy photon), and the “irreducible” backgrounds, where particles are the same kind as the ones we are looking for. So when you see plots showing the gamma gamma searches, you can expect to see four categories: gamma gamma (irreducible Standard Model background), jet gamma, jet jet, and “other”. As we make more and more stringent requirements to eliminate these backgrounds we also lose signal events, so we have to trade off background rejection against signal acceptance.
On top of all these problems we also have to take reconstruction and acceptance into account. We cannot record every event, so we pick and choose events based on how interesting they look. Does an event have two high energy photon candidates? If so, record it. Does an event have four leptons in the final state? If so, record it. These trigger decisions are affected by definitions of “high energy”, by the algorithms we use, and by the coverage of the detectors. We have to take all of these biases into account with systematic uncertainties, and these can dominate for some of the searches.
When we put all this together we end up asking some simple questions: “How many background events do we expect?” “How many events do we see in data?” “What is the total uncertainty on the background and signal?” “How many signal events do we think we see?” “How much larger is this than the uncertainty?” This then gives us the “n sigma” for that mode across the mass range. We combine these sigmas within a single experiment, taking correlated uncertainties into account, and that’s how we get our Brazil plots.
### How likely is a discovery?
In 2011 we had about $$5fb^{-1}$$ of luminosity and we saw about 3 sigma for each experiment. In 2012 we had about $$6.5fb^{-1}$$ of luminosity at slightly higher energy (giving a factor of 1.25). So we can work out what to expect for the 2012 sensitivity: just take the 2011 result of 3 sigma and add it in quadrature to $$(\sqrt{1.25\times 6.5/5})\times 3$$ sigma, and that comes out at 4.9 sigma. If we’re lucky, one or more experiments might see more than 5 sigma, meaning we could have a discovery!
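That quadrature estimate is simple enough to check yourself; here is the arithmetic as a short R snippet (my illustration of the formula above):

sig_2011 <- 3
sig_2012 <- sqrt(1.25 * 6.5 / 5) * 3   # scale 2011's 3 sigma by the energy and luminosity factors
sqrt(sig_2011^2 + sig_2012^2)          # combine in quadrature: about 4.9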
### What next for the Higgs?
If we make a discovery, either now or in the coming weeks, then we need to measure the properties of the new particle. We can’t claim to have discovered the Standard Model Higgs boson until we’ve measured its branching fractions and spin. Fortunately, if the Higgs boson is at 125 GeV then we have a rich variety of decay modes, and this could give us insights into all kinds of interesting measurements, such as the quark masses.
Now go and enjoy the seminar! |
# physics
## August 4, 2020
Physics news 4th August 2020
Lots of interesting things happening at Fermilab, then.
“US CMS is very proud to acknowledge the significant impact made by its members in deploying innovative analysis techniques, including cutting-edge AI methods, which were critical in establishing the evidence for Higgs boson decays into a muon and antimuon pair,”
It may be an idea to have an understanding of, or a copy of, the Standard Model of particle physics to hand.
This is also being discussed on the qoto discourse instance, where there is also an updated standard model graphic.
You may also find Science Forums a good place to discuss.
This has also prompted me to try and figure out how to typeset the basic equation in LaTeX.
$H \longrightarrow \mu^{+} + \mu^{-}$

(The quotation above is from the news.fnal.gov website.)
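For anyone who wants to reproduce it, here is a minimal sketch of the markup (my own example, not the site's source; the \[ ... \] display form works in plain LaTeX, while a blog running MathJax may want $$ ... $$ delimiters instead):

\documentclass{article}
\begin{document}
% Higgs boson decaying to a muon-antimuon pair
\[ H \longrightarrow \mu^{+} + \mu^{-} \]
\end{document}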
## June 25, 2020
In the night sky : Orion Completed
I have finally, after months of going back and forth to this (partly due to other study and things to do), completed the 24-hour OU / OpenLearn course. I was just doing this casually anyway.
Note: This is a Level 1 Open University course.
After studying this course, you should be able to:
* Understand facts, concepts, principles, theories, classifications and language used in astronomy
* Understand the range of sizes, distances and motions of objects in the night sky
* Understand the structure, evolution and the main processes operating in stars
* Understand the properties of planets in our Solar System and exoplanetary systems
* Understand the history of the universe.
Really interesting course, lots to think about and learn just from doing some of the research for the questions posed during the course.
Astronomy uses the Greek alphabet for star names, for example, so this post may help.
How do you solve x/(x^2-8)=2/x?
Jul 27, 2016
$x = \pm 4$
Explanation:
First, move everything from the denominator to the numerator;
we do this by multiplying both sides by the LCM of the denominators, $x \left({x}^{2} - 8\right)$.
$x \left({x}^{2} - 8\right) \cdot \frac{x}{{x}^{2} - 8} = \frac{2}{x} \cdot x \left({x}^{2} - 8\right)$
the $\left({x}^{2} - 8\right)$ on the left side cancels out and the $x$ on the right side cancels out
$x \left(\cancel{{x}^{2} - 8}\right) \cdot \frac{x}{\cancel{{x}^{2} - 8}} = \frac{2}{\cancel{x}} \cdot \cancel{x} \left({x}^{2} - 8\right)$
which leaves us with:
$x \cdot x = 2 \cdot \left({x}^{2} - 8\right)$
after this we now have
${x}^{2} = 2 \left({x}^{2} - 8\right)$
next we remove the parentheses by multiplying each term by 2
now we have
${x}^{2} = 2 {x}^{2} - 16$
next we will move the 16 to the other side to avoid working with negative numbers
$16 + {x}^{2} = 2 {x}^{2}$
then we will combine like terms by subtracting ${x}^{2}$
$16 = 2 {x}^{2} - {x}^{2}$ leaving us with $16 = {x}^{2}$
then we will get rid of the ${x}^{2}$ by taking the square root of both sides
$\pm \sqrt{16} = \sqrt{{x}^{2}}$
now we have our final answer of
$x = \pm 4$ |
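As a quick check (editor's note, not part of the original answer), substitute both roots back into the original equation: $\frac{4}{{4}^{2} - 8} = \frac{4}{8} = \frac{1}{2} = \frac{2}{4}$ and $\frac{- 4}{{\left(- 4\right)}^{2} - 8} = \frac{- 4}{8} = - \frac{1}{2} = \frac{2}{- 4}$. Both values satisfy the equation, and neither makes a denominator zero.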
# Math Help - help with the taylor series for a bonus problem
1. ## help with the taylor series for a bonus problem
hey all,
i am stuck on a math problem that no one i've talked to can figure out. the idea is i have to find a taylor series in x - a through the term (x - a)^3 for the problem:
2 - x + 3(x^2) - x^3 , a = -1
if anyone has ANY idea at all how this is done, please let me know. thank you.
2. Originally Posted by thatguyinthesuit
hey all,
i am stuck on a math problem that no one i've talked to can figure out. the idea is i have to find a taylor series in x - a through the term (x - a)^3 for the problem:
2 - x + 3(x^2) - x^3 , a = -1
if anyone has ANY idea at all how this is done, please let me know. thank you.
put $u=x-a$, then $x=u+a$; now substitute this into your expression to get:
$
2 - (u+a) + 3(u+a)^2 -(u+a)^3
$
now expand the powers and collect terms to give a cubic in $u$, and finally replace $u$ by $x-a$.
RonL |
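For completeness (a worked finish, following the recipe above): with $a = -1$ we have $u = x + 1$, and $2 - (u - 1) + 3(u - 1)^2 - (u - 1)^3 = 7 - 10u + 6u^2 - u^3$, so the Taylor form is $2 - x + 3x^2 - x^3 = 7 - 10(x+1) + 6(x+1)^2 - (x+1)^3$. You can confirm this against the Taylor coefficients: $f(-1) = 7$, $f'(-1) = -10$, $f''(-1)/2! = 6$, and $f'''(-1)/3! = -1$.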
## Overview
Part of the semester long project is to conduct a simulation-based power-analysis. This document provides some background on power-analyses and use of simulations for sample-size and design planning. Attempts will be made to integrate additional examples in the remaining labs this semester and next as we move on to more complicated designs and statistical tests.
1. Develop a deeper understanding of the assumptions behind statistical tests
2. Sample-size planning and power-analysis
3. Understand how real data ought to behave given the assumptions you are making about the data.
## Null-hypothesis
In general, the null-hypothesis is the hypothesis that your experimental manipulation didn’t work. Or, the hypothesis of no differences.
We can simulate null-hypotheses in R for any experimental design. We do this in the following way:
1. Use R to generate sample data in each condition of a design
2. Make sure the sample data comes from the very same distribution for all conditions (ensure that there are no differences)
3. Compute a test-statistic for each simulation, save it, then repeat to create the sampling distribution of the test statistic.
4. The sampling distribution of the test-statistic is the null-distribution. We use it to set an alpha criterion, and then do hypothesis testing.
### Null for a t-test
# samples A and B come from the same normal distribution
A <- rnorm(n=10,mean=10, sd=5)
B <- rnorm(n=10,mean=10, sd=5)
# the pvalue for this one pretend simulation
t.test(A,B,var.equal=TRUE)$p.value
#> [1] 0.2658035

# running the simulation
# every time we run this function we do one simulated experiment and return the p-value
sim_null <- function(){
  A <- rnorm(n=10, mean=10, sd=5)
  B <- rnorm(n=10, mean=10, sd=5)
  return(t.test(A,B,var.equal=TRUE)$p.value)
}
# use replicate to run the sim many times
outcomes <- replicate(1000,sim_null())
# plot the null-distribution of p-values
hist(outcomes)
# proportion of simulated experiments had a p-value less than .05
length(outcomes[outcomes<.05])/1000
#> [1] 0.058
We ran the above simulation 1000 times. By definition, we should get approximately 5% of the simulations returning a p-value less than .05. If we increase the number of simulations, then we will get a more accurate answer that converges on 5% every time.
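For example (a quick check, reusing the sim_null function defined above), re-running with 10,000 simulations gives an estimate that sits closer to .05:

outcomes <- replicate(10000, sim_null())
# proportion of p-values below the alpha criterion; should land near 0.05
mean(outcomes < .05)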
## Alternative hypothesis
The very general alternative to the null hypothesis (no differences), is often called the alternative hypothesis: that the manipulation does cause a difference. More specifically, there are an infinite number of possible alternative hypotheses, each involving a difference of a specific size.
Consider this: how often will we find a p-value less than .05 if there is a mean difference of 5 between samples A and B? Let’s use all of the same parameters as before, except this time we sample from different distributions for A and B.
A <- rnorm(n=10,mean=10, sd=5)
B <- rnorm(n=10,mean=15, sd=5)
t.test(A,B,var.equal=TRUE)$p.value
#> [1] 0.4120979

# make the mean for B 15 (5 more than A)
sim_alternative <- function(){
  A <- rnorm(n=10, mean=10, sd=5)
  B <- rnorm(n=10, mean=15, sd=5)
  return(t.test(A,B,var.equal=TRUE)$p.value)
}
# use replicate to run the sim many times
outcomes <- replicate(1000,sim_alternative())
# plot the distribution of p-values
hist(outcomes)
# proportion of simulated experiments had a p-value less than .05
length(outcomes[outcomes<.05])/1000
#> [1] 0.554
We programmed a mean difference of 5 between our sample for A and B, and we found p-values less than .05 a much higher proportion of the time. This is sensible, as there really was a difference between the samples (we put it there).
## Power and Effect Size
Power: the probability of rejecting a null-hypothesis, given that there is a true effect of some size.
Effect-size: In general, it’s the assumed size of the difference. In the above example, we assumed a difference of 5, so the assumed effect-size was 5.
There are many other ways to define and measure effect-size. Perhaps the most common way is Cohen’s D. Cohen’s D expresses the mean difference in terms of standard deviation units. In the above example, both distributions had a standard deviation of 5. The mean for A was 10, and the mean for B was 15. Using Cohen’s D, the effect-size was 1. This is because 15 is 1 standard deviation away from 10 (the standard deviation is also 5).
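Here is a minimal sketch of computing Cohen's D from two samples (my own helper function, assuming equal group sizes for the pooled standard deviation):

# Cohen's D: the mean difference expressed in standard deviation units
cohens_d <- function(A, B){
  pooled_sd <- sqrt((var(A) + var(B)) / 2) # simple pooling for equal n
  (mean(B) - mean(A)) / pooled_sd
}
A <- rnorm(n=10, mean=10, sd=5)
B <- rnorm(n=10, mean=15, sd=5)
cohens_d(A, B) # hovers around 1 across repeated runs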
When we calculated the proportion of simulations that returned a p-value less than .05, we found the power of the design to detect an effect-size of 1.
Power depends on three major things:
1. Sample-size
2. Effect-size
3. Alpha-criterion
Power is a property of a design. The power of a design increases when sample-size increases. The power of a design increases when the actual true effect-size increases. The power of a design increases when the alpha criterion increases (e.g., going from .05 to .1, making it easier to reject the null).
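To see the alpha criterion at work (a quick illustration using the pwr package, which is introduced in the next section), hold sample-size and effect-size fixed and vary sig.level:

library(pwr)
# same design, two alpha criteria: power rises as alpha is relaxed
pwr.t.test(n=20, d=.5, sig.level=.05, type="two.sample")$power
pwr.t.test(n=20, d=.5, sig.level=.10, type="two.sample")$power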
## Power analysis with R
There are many packages and functions for power analysis. Power analysis is important for planning a design. For example, you can determine how many subjects you need in order to have a high probability of detecting a true effect (of a particular size) if it is really there.
### pwr package
Here is an example of using the pwr package to find the power of an independent-samples t-test, with n = 9 per group, to detect an effect size of d = 0.2 (the values used in the code below). A simulation of the same design would converge on this answer as the number of simulations increased.
library(pwr)
pwr.t.test(n=9,
d=.2,
sig.level=.05,
type="two.sample",
alternative="two.sided")
#>
#> Two-sample t test power calculation
#>
#> n = 9
#> d = 0.2
#> sig.level = 0.05
#> power = 0.06846584
#> alternative = two.sided
#>
#> NOTE: n is number in *each* group
## for an r test
pwr.r.test(n=10,
r=.2,
sig.level=.05,
alternative="two.sided")
#>
#> approximate correlation power calculation (arctangh transformation)
#>
#> n = 10
#> r = 0.2
#> sig.level = 0.05
#> power = 0.08574594
#> alternative = two.sided
As I mentioned there are many functions for directly computing power in R. Feel free to use them. In this class, we learn how to use simulation to conduct power analyses. This can be a redundant approach that is not necessary, given there are other functions we can use. Additionally, we will not get exact solutions (but approximate ones). Nevertheless, the existing power functions can be limited and may not apply to your design. The simulation approach can be extended to any design. Learning how to run the simulations will also improve your statistical sensibilities, and power calculations will become less of a black box.
Two more things before moving onto simulation: power-curves, and sample-size planning.
### power-curves
A design’s power is in relation to the true effect-size. The same design has different levels of power to detect different sized effects. Let’s make a power curve to see the power of an independent-samples t-test (n = 9 per group, matching the code below) to detect a range of effect-sizes.
effect_sizes <- seq(.1,2,.1)
power <- sapply(effect_sizes,
FUN = function(x) {
pwr.t.test(n=9,
d=x,
sig.level=.05,
type="two.sample",
                             alternative="two.sided")$power})
plot_df <- data.frame(effect_sizes,power)
library(ggplot2)
ggplot(plot_df, aes(x=effect_sizes,
                    y=power))+
  geom_point()+
  geom_line()

## r test power curve
r_sizes <- seq(0,1,.1)
power <- sapply(r_sizes,
                FUN = function(x) {
                  pwr.r.test(n=9,
                             r=x,
                             sig.level=.05,
                             alternative="two.sided")$power})
plot_df <- data.frame(r_sizes,power)
library(ggplot2)
ggplot(plot_df, aes(x=r_sizes,
y=power))+
geom_point()+
geom_line()
This power curve applies to all independent-sample t-tests with n = 9 per group. It is a property, or fact, about those designs. Every design has its own power curve. The power curve shows us what should happen (on average) when the true state of the world involves effects of different sizes.
If you do not know the power-curve for your design, then you do not know how sensitive your design is for detecting effects of particular sizes. You might accidentally be using an under powered design, with only a very small chance of detecting an effect of a size you are interested in.
If you do know the power-curve for your design, you will be in a better position to plan your experiment, for example by modifying the number of subjects that you run.
### Sample-size planning
Here is one way to plan for the number of subjects that you need to find an effect of interest.
1. Establish a smallest effect-size of interest
2. Create a curve showing the power of your design as a function of number of subjects to detect the smallest effect-size of interest.
It’s not clear how you establish a smallest effect-size of interest. But let’s say you are interested in detecting an effect of at least d = .2. This means that two conditions would differ by at least a .2 standard deviation shift. If you find something smaller than that, let’s say you wouldn’t care about it because it wouldn’t be big enough for you to care. How many subjects do you need to have a high powered design, one that would almost always reject the null-hypothesis?
This is for an independent samples t-test:
num_subjects <- seq(10,1000,10)
power <- sapply(num_subjects,
FUN = function(x) {
pwr.t.test(n=x,
d=.2,
sig.level=.05,
type="two.sample",
                             alternative="two.sided")$power})
plot_df <- data.frame(num_subjects,power)
library(ggplot2)
ggplot(plot_df, aes(x=num_subjects,
                    y=power))+
  geom_point()+
  geom_line()

Well, it looks like you need many subjects to have high power. For example, if you want to detect the effect 95% of the time, you would need around 650 subjects. It’s worth doing this kind of analysis to see if your design checks out. You don’t want to waste your time running an experiment that is designed to fail (even when the true effect is real).

## Simulation approach to power calculations

The simulation approach to power analysis involves these steps:

1. Use R to sample numbers into each condition of any design.
2. Set the properties (e.g., n, mean, sd, kind of distribution, etc.) of each sample in each condition, and mimic any type of expected pattern.
3. Analyze the simulated data to obtain a p-value (use any analysis appropriate to the design).
4. Repeat many times, save the p-values.
5. Compute power by determining the proportion of simulated p-values that are less than your alpha criterion.

For all simulations, increasing the number of simulations will improve the accuracy of your results. We will use 1000 simulations throughout. 10,000 would be better, but might take just a little bit longer.

### Simulated power for a t-test

A power curve for n=10.

# function to run a simulated t-test
sim_power <- function(x){
  A <- rnorm(n=10, mean=0, sd=1)
  B <- rnorm(n=10, mean=(0+x), sd=1)
  return(t.test(A,B,var.equal=TRUE)$p.value)
}
x <- .3
sims <- replicate(1000,sim_power(x))
length(sims[sims<.05])/length(sims)
#> [1] 0.088
# vector of effect sizes
effect_sizes <- seq(.1,2,.1)
# run simulation for each effect size 1000 times
power <- sapply(effect_sizes,
FUN = function(x) {
sims <- replicate(1000,sim_power(x))
sim_power <- length(sims[sims<.05])/length(sims)
return(sim_power)})
# combine into dataframe
plot_df <- data.frame(effect_sizes,power)
# plot the power curve
ggplot(plot_df, aes(x=effect_sizes,
y=power))+
geom_point()+
geom_line()
In this case, there is no obvious benefit to computing the power curve by simulation. The answer we get is similar to the answer we got before using the pwr package, but our simulation answer is noisier. Why bother with the simulation?
One answer to the why bother question is that you can simulate deeper aspects of the design and get more refined answers without having to work through the math.
### Simulating cell-size
Many experimental designs involve multiple measurements, or trials, for each subject in each condition. How many trials should we require for each subject in each condition? Traditional power analysis doesn’t make it easy to answer this question. However, the power of a design will depend not only on the number of subjects, but also on the number of trials used to estimate the mean for each subject in each condition.
Consider a simple Stroop experiment. The researcher is interested in measuring a Stroop effect of at least d=.1 (e.g., the difference between mean congruent trials is .1 standard deviations smaller than mean incongruent trials). How many subjects are required? And, how many trials should each subject perform in the congruent and incongruent conditions? Let’s use simulation to find out.
# function to run a simulated t-test
# nsubs sets number of subjects
# ntrials to change number of trials
# d sets effect size
# this is a paired sample test to model Stroop
sim_power <- function(nsubs,ntrials,d){
A <- replicate(nsubs,mean(rnorm(n=ntrials,mean=0, sd=1)))
B <- replicate(nsubs,mean(rnorm(n=ntrials,mean=d, sd=1)))
  return(t.test(A,B,paired=TRUE)$p.value)
}

# vectors for number of subjects and trials
n_subs_vector <- c(10,20,30,50)
n_trials_vector <- c(10,20,30,50,100)

# a loop to run all simulations
power <- c()
subjects <- c()
trials <- c()
i <- 0 # use this as a counter for indexing
for(s in n_subs_vector){
  for(t in n_trials_vector){
    i <- i+1
    sims <- replicate(1000,sim_power(s,t,.1))
    power[i] <- length(sims[sims<.05])/length(sims)
    subjects[i] <- s
    trials[i] <- t
  }
}

# combine into dataframe
plot_df <- data.frame(power,subjects,trials)

# plot the power curve
ggplot(plot_df, aes(x=subjects,
                    y=power,
                    group=trials,
                    color=trials))+
  geom_point()+
  geom_line()

# a vectorized version of the loop
# run the simulation for each design cell
# power <- outer(n_subs_vector,
#                n_trials_vector,
#                FUN = Vectorize(function(x,y) {
#                  sims <- replicate(100,sim_power(x,y,.1))
#                  sim_power <- length(sims[sims<.05])/length(sims)
#                  return(sim_power)
#                }))

To my eye, it looks like 30 subjects with 100 trials in each condition would give you very high power to find a Stroop effect of d=.1.

## Closing thoughts

The simulation approach is powerful and flexible. It can be applied whenever you can formalize your assumptions about the data. And your simulations can be highly customized to account for all kinds of nuances, like different numbers of subjects, different distributions, assumptions about noise, etc. If you are wondering what your design can do, maybe you should simulate it.

## More example(s)

As I find time I will try to add more examples here, especially out-of-the-box examples to illustrate how simulation can be applied.

### One-Way ANOVA

Let’s extend our simulation-based approach to the one-way ANOVA. Let’s assume a between-subjects design, with one factor that has four levels: A, B, C, and D. There will be 20 subjects in each group. What is the power curve for this design to detect effects of various sizes?

Immediately, the situation becomes complicated: there are numerous ways that the means for A, B, C, and D could vary. Let’s assume the simplest case: three of them are the same, and one of them differs by some number of standard deviations. We will compute the main effect, and report the proportion of significant experiments as we increase the effect size for the fourth group.

# function to run a simulated one-way ANOVA
sim_power_anova <- function(x){
  A <- rnorm(n=20, mean=0, sd=1)
  B <- rnorm(n=20, mean=0, sd=1)
  C <- rnorm(n=20, mean=0, sd=1)
  D <- rnorm(n=20, mean=(0+x), sd=1)
  df <- data.frame(condition = as.factor(rep(c("A","B","C","D"), each=20)),
                   DV = c(A,B,C,D))
  aov_results <- summary(aov(DV~condition, df))
  # return the p-value
  return(aov_results[[1]]$`Pr(>F)`[1])
}
# vector of effect sizes
effect_sizes <- seq(.1,2,.1)
# run simulation for each effect size 1000 times
power <- sapply(effect_sizes,
FUN = function(x) {
sims <- replicate(1000,sim_power_anova(x))
sim_power <- length(sims[sims<.05])/length(sims)
return(sim_power)})
# combine into dataframe
plot_df <- data.frame(effect_sizes,power)
# plot the power curve
ggplot(plot_df, aes(x=effect_sizes,
y=power))+
geom_point()+
geom_line()
It looks like this design (1 factor, between-subjects, 20 subjects per group) has high power to detect an effect of d=1, specifically when one of the groups differs from the others by d=1.
However, most effects in psychology are small; d=.2 is very common. How many subjects does this design require to have high power (let’s say above .95, although most people use .8) to detect that small effect?
sim_power_anova <- function(x){
A <- rnorm(n=x,mean=0, sd=1)
B <- rnorm(n=x,mean=0, sd=1)
C <- rnorm(n=x,mean=0, sd=1)
D <- rnorm(n=x,mean=.2, sd=1)
df <- data.frame(condition = as.factor(rep(c("A","B","C","D"),each=x)),
DV = c(A,B,C,D))
aov_results <- summary(aov(DV~condition,df))
#return the pvalue
return(aov_results[[1]]$`Pr(>F)`[1])
}
# vector of group sizes (subjects per group)
subjects <- seq(10,1000,50)
# run simulation for each effect size 1000 times
power <- sapply(subjects,
FUN = function(x) {
sims <- replicate(1000,sim_power_anova(x))
sim_power <- length(sims[sims<.05])/length(sims)
return(sim_power)})
# combine into dataframe
plot_df <- data.frame(subjects,power)
# plot the power curve
ggplot(plot_df, aes(x=subjects,
y=power))+
geom_point()+
geom_line()
The simulations suggest we need about 560 subjects in each group to have power .95 to detect the effect (d=.2). That’s a total of 2240 subjects. Reality can be surprising when it comes to power analysis. It is better to be surprised about your design before you run your experiment, not after.
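As a cross-check on the simulation (my own calculation, using the pwr package from earlier), the analytic one-way ANOVA power function gives a similar per-group n once the assumed group means (0, 0, 0, .2) are converted to Cohen's f:

library(pwr)
means <- c(0, 0, 0, .2)
f <- sqrt(mean((means - mean(means))^2)) # Cohen's f with sd = 1, about .087
pwr.anova.test(k=4, f=f, sig.level=.05, power=.95) # n per group comes out near 570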
### Correlation between traits and behavior
Details: suppose a researcher gives 20 subjects a questionnaire with 20 questions, and also measures each subject's performance on a behavioral task.
Let’s assume that each question involves a Likert scale from 1 to 7, and that each person randomly picks a number from 1 to 7 for each question. Let’s assume performance on the behavioral task is sampled from a normal distribution with mean = 0 and sd = 1.
# get 20 random answers for all 20 subjects and 20 questions
# columns will be individual subjects 1 to 20
# rows will be questions 1 to 20
questionnaire <- matrix(sample(1:7,20*20, replace=TRUE),ncol=20)
# get 20 measures of performance on the behavioral task
behavior <- rnorm(n=20, mean=0, sd=1)

# correlate behavior with each question
save_correlations <- c()
for(i in 1:20){
  save_correlations[i] <- cor(questionnaire[i,], behavior) # question i vs. behavior
}
# show histogram of 20 correlations
hist(save_correlations)
The histogram shows that a range of correlations between individual questions and behavior can emerge just by chance alone. If you run the above code a few times, you will see that the histogram changes a bit because of random chance.
Oftentimes researchers might not know which question on the questionnaire is the best question. That is, the one that best correlates with behavior. Consider a researcher who computes all of the correlations between each question and behavior, and then chooses the question with highest correlation (positive or negative) as the best question. After all, it has the highest correlation. After choosing this question, they might suggest that behavior is strongly correlated with how people answer this question.
Let’s try to find out by simulation what kinds of large correlations can occur just by chance alone. We will run the above many times, and each time we will save the absolute value of the largest correlation between a question and behavior. Just how large can these correlations be by chance alone?
save_max <- c()
for(j in 1:10000){
  questionnaire <- matrix(sample(1:7,20*20, replace=TRUE), ncol=20)
  behavior <- rnorm(n=20, mean=0, sd=1)
  save_correlations <- c()
  for(i in 1:20){
    save_correlations[i] <- cor(questionnaire[i,], behavior)
  }
  save_max[j] <- max(abs(save_correlations))
}
hist(save_max)
The simulation shows that chance alone in this situation can produce very large correlations, as large as .8 or .9 (although not very often).
The situation changes somewhat if many more subjects are run. Let’s do the same as above, but run 200 subjects, rather than 20.
save_max <- c()
for(j in 1:10000){
  questionnaire <- matrix(sample(1:7,200*20, replace=TRUE), ncol=200)
  behavior <- rnorm(n=200, mean=0, sd=1) # now 200 subjects
  save_correlations <- c()
  for(i in 1:20){
    save_correlations[i] <- cor(questionnaire[i,], behavior)
  }
  save_max[j] <- max(abs(save_correlations))
}
hist(save_max)
Journal of Theoretical Chemistry, Volume 2013 (2013), Article ID 720794, 14 pages. http://dx.doi.org/10.1155/2013/720794
Research Article
## Structures and Stabilities of Alkaline Earth Metal Oxide Nanoclusters: A DFT Study
Department of Chemistry, University of Delhi, Delhi-110 007, India
Received 29 March 2013; Accepted 28 July 2013
Academic Editors: F. Aquilante, M. Koyama, and T. Takayanagi
Copyright © 2013 Prinka Batra et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
#### Abstract
The stability orders of a number of alkaline earth oxide cluster isomers (MO)n (M = Mg, Ca, Sr, Ba; n = 2–6) have been determined by means of density functional theory studies using the LDA-PWC functional. Among the candidate structures, the hexagonal-ring-based isomers and the slab shapes are found to display similar stabilities. Stacks of hexagonal (MO)3 rings are found to be the slightly preferred growth strategy among the (MgO)6 isomers. In contrast, the slab structures are slightly preferred for the other alkaline earth metal oxide (MO)6 clusters. An explanation based on packing and aromaticity arguments has been proposed. This study may have important implications for modeling and understanding the initial growth patterns of small nanostructures of alkaline earth metals.
#### 1. Introduction
In the last few years, considerable effort has been directed to the understanding of metallic and semiconductor clusters. Clusters are aggregates of atoms or molecules intermediate in size between individual atoms and bulk matter, and their studies provide an interesting way to develop materials with varying properties by changing size and shape. Hence, studies of cluster properties as a function of size have received prominence in recent years. While much progress has been made on clusters of metals and semiconductors, metal oxide particles are often considered to be bulk fragments. However, their structure and properties could be entirely different in small clusters [1–4].
In this work, we have performed a comparative study of the structures, stabilities, and properties of some alkaline earth metal oxides ((MgO)n, (CaO)n, (SrO)n, and (BaO)n). Magnesium oxide crystallizes in the rock-salt structure and has some typical semiconducting properties, such as wide valence band (~6 eV), large dielectric constant (9.8), and small exciton binding energy (<0.1 eV). For bulk MgO, the experimental value of the band gap is 7.8 eV [5]. It is close to an ideal insulating ionic solid with a valence band structure dominated by the strong potential of the ionic cores. Studies of the electronic properties of MgO are motivated by its technological applications, such as in catalysis, microelectronics, and electrochemistry. Bulk MgO is relatively inert, but its reactivity is greatly enhanced at the nanoscale. The high surface area and the intrinsically high surface reactivity of MgO nanocrystals make these materials especially effective as adsorbents [6]. In fact, they have been called “destructive adsorbents” because of their tendency to adsorb and simultaneously destroy by bond breaking processes a series of toxic chemicals [6–9].
It is interesting to study a similar system like calcium oxide in order to assess whether those trends are a general feature of alkaline earth oxide clusters or not. From the theoretical point of view, Ca2+ is larger than Mg2+, so we can expect ionic size effects to play an important role in determining structural differences. Besides, Ca2+ is approximately six times as polarizable as Mg2+, and the polarizabilities of the oxide anions are also larger in CaO because the bonding is weaker than in MgO. Calcium oxide also crystallizes in the close-packed “rock-salt” structure and is primarily an ionic material, with some degree of covalency in its bonding. It is considered as a prototype oxide from the theoretical point of view, with a wide band gap (7.1 eV) [10] and a high dielectric constant (11.8). Furthermore, local density approximation band structure calculations predicted a half-metallic ferromagnetic ground state for CaO [11]. Nanocrystalline CaO is used as an absorbent to remove COD from paper mill effluent [12].
Barium oxide is an oxide with interesting electronic and structural properties. It is also a precursor to the ferroelectric perovskite oxide BaTiO3 and a component of the earth’s mantle. Barium strontium oxide coated carbon nanotubes serve as field emitters [13].
Theoretical work on ionic materials has been centered mostly in the family of alkali metal halides, and studies of metal oxide clusters have been comparatively scarce, despite their importance in many branches of surface physics, such as heterogeneous catalysis or corrosion. The mass spectra and collision induced fragmentation data for stoichiometric and cluster ions have been reported [14, 15]. The mass spectra of cluster ions [16] and experimental measurements of several singly and doubly ionized cluster ions of MgO and CaO by laser ionization time-of-flight mass spectrometry [17–20] have also been published. Simple ionic models based on phenomenological pair potentials have been used to explain the global trends found in these experiments [17–20]. Several ab initio calculations on stoichiometric MgO clusters have been presented [21–32], but the growth of these clusters is still not well understood.
We aim to study the electronic properties of the clusters of these alkaline earth metal oxides using the density functional approach.
#### 2. Computational Details
In the calculations reported in the paper, first-principles density functional (DF) calculations were performed using the DMol3 code [33–36], available from Accelrys Inc. in the Materials Studio 3.2 package. The DFT calculations were carried out employing both the generalized gradient approximation (GGA) with the PW91 functional [37] and the local density approximation (LDA) with the PWC (Perdew-Wang local correlation) functional [38]. Hybrid functionals such as B3LYP, though more accurate for metal oxide dissociation energies, cannot be efficiently used with plane waves and are hence not useful for calculations on solid materials [39]. Our calculations employed numerical basis sets of double-ζ quality plus polarization functions (DNP) to describe the valence orbitals. This basis set is the numerical equivalent of the Gaussian basis, 6-31G**. The cores of Ba2+ and Sr2+ were treated with the all-electron approach.
Complete geometry optimizations for all structures were carried out. The atomic positions were relaxed to achieve minimum energy, until the system energy converged to 2 × 10−5 Ha and the gradient to 0.004 Ha Å−1. The SCF tolerance was set at 1 × 10−5 and the maximum displacement at 0.005 Å. The binding energies (BEs), the highest occupied molecular orbital (HOMO)-lowest unoccupied molecular orbital (LUMO) gaps, Fermi energies, and density of states were also computed. The reported binding energy values were corrected for zero-point vibrational energies.
#### 3. Results
Various structures, including the slab, hexagonal, octagonal, ladder, and other types, were studied for various numbers of formula units of the four alkaline earth metal oxides. Various theoretical studies at different levels of calculations have been reported in the literature [1, 2, 29, 32, 40–42], but there is no clear consensus regarding the suitability of LDA, GGA, or hybrid functionals for calculations on metal oxide nanoclusters. We therefore first compared results for the MgO molecular form obtained by different methods with the experimental quantities. The calculated LDA-PWC, GGA-PW91, B3LYP [40, 41], and MP4 [40, 41] values for the binding energy are 3.69, 3.22, 2.03, and 3.22 eV, respectively. The latter three fall short of the experimental value [43] of 3.57 eV. The LDA result is far superior to the other calculations, although its tendency to overbind is clear from the result. Similarly, the computed Mg–O bond lengths are 1.743, 1.767, and 1.756 Å, respectively, for LDA-PWC, GGA-PW91, and B3LYP [40, 41] calculations in comparison with the experimental [30, 31, 43] value of 1.749 Å. Here, again, our LDA result shows the best correspondence with experiment. The calculated vibrational frequencies are 751 cm−1 (LDA) and 719 cm−1 (GGA), in comparison with the observed [43] value of 785 cm−1.
For CaO, too, the LDA-PWC calculated binding energy (5.08 eV) is in better agreement with the experimental [43] value of 4.76 eV than the B3LYP/6-311G(2d) [40, 41] value (4.28 eV). Though the GGA-PW91 value (4.55 eV) is in slightly better agreement with experiment, the LDA Ca–O bond distance (1.818 Å) is in excellent agreement with the experimental [43] value (1.822 Å), while the GGA value is considerably larger (1.843 Å), reflecting the tendency of GGA to underbind. In the rest of the paper, therefore, we report the LDA results, but we also offer comparison with our calculated GGA results.
##### 3.1. Stabilities of Structures
(MO)2 Where M = Mg, Ca, Sr, Ba. The optimized structures are rhombus shaped and planar (Figure 1). The angle about the magnesiums is obtuse (95.7°) in (MgO)2, indicating the overlap repulsion between the large oxygen ions in close proximity since the cation size is small. However, for CaO, the bond angles are acute (∠OCaO = 86.4°). The bond angle about the metal ion decreases with increasing atomic number of the metal ion (∠OSrO = 82.7°, ∠OBaO = 79.3°), in parallel with the increasing ionic radii of the metal ions. The metal radii for Mg, Ca, Sr, and Ba are 1.60, 1.97, 2.15, and 2.17 Å, respectively, whereas the radii for the corresponding M2+ cations are 0.65, 0.97, 1.15, and 1.35 Å [44]. The atomic and ionic radii of O and lattice O2− are 0.66 and 1.40 Å, respectively. The optimized M–O bond distances are 1.858, 2.005, 2.140, and 2.277 Å, respectively, for M = Mg, Ca, Sr, and Ba, which are all smaller than the sum of the ionic radii of M2+ and O2−, indicating that these bonds are not purely ionic. In contrast, the observed M–O bond distances in the ionic crystals of the metal oxides are 2.106, 2.405, 2.565, and 2.762 Å, respectively. The corresponding gas phase values [43] for the diatomic species are 1.749, 1.822, 1.920, and 1.940 Å, respectively. The optimized M–O bond distances are slightly closer to the gas phase values than to the ionic values. The calculated Mulliken charges on the metal ions are 0.930, 1.253, 0.959, and 0.943, respectively. In view of the fact that the metal and oxygen charges are close to +1 and −1, respectively, the actual ionic radii are expected to lie between those for the neutral state and the divalent ions. The values of the partial charges indicate the slightly higher ionic character of (CaO)2 compared to the other (MO)2 systems.
Figure 1: Optimized structures from LDA and GGA calculations, along with the initial geometries for the (MO)2 systems.
Figure 1 gives the optimized structures for all the (MO)2 moieties. In this figure, as well as all other figures, the metal ions are represented by green balls and the oxide ions by red balls.
(MO)3 Where M = Mg, Ca, Sr, Ba. Table 1 summarizes the results for the various (MO)3 systems studied here. Here, MO stands for the four alkaline earth metal oxides, MgO, CaO, BaO, and SrO. For (MO)3, the two possible structures, namely, ladder and hexagonal, were studied for all the metal oxides.
Table 1: Calculated binding energies, HOMO-LUMO gaps, and Fermi energies (in eV) for (MO)3 clusters.
(MgO)3. Both ladder and hexagonal ring starting structures optimized to a distorted hexagonal form with a binding energy of −23.33 eV (see Table 1 and Figure 2(a)). It is well known that (MgO)3 ring structures are competitive building blocks in the growth of very small MgO clusters [29]. This result is also in agreement with our earlier calculations [22, 23] for (MgO)12 clusters, which optimized to stacked rings from initial cubic rock salt structures.
Figure 2: (a) Optimized structures from LDA and GGA calculations, along with the initial geometries for the (MgO)3 system. (b) Optimized structures from LDA and GGA calculations, along with the initial geometries for the (CaO)3 system. (c) Optimized structures from LDA and GGA calculations, along with the initial geometries for the (SrO)3 system.
In this ring structure (Figure 2(a)), the optimized (LDA-PWC) bond angle about each oxygen is 103.9°, and that about each Mg is 136.1°. The Mg–O bond distance is 1.819 Å. The increase of the ∠OMgO signifies repulsion between the oxygens. In fact, the O–O distances are 3.374 Å, compared to 2.866 Å for the Mg–Mg distance. The Mg–O bond orders are 0.872, implying significant covalent character. The Mulliken charge on each oxygen is −0.915. This signifies increased size of the oxygen ion in the system relative to the oxygen atom. Therefore, the energy is lowered by keeping the oxide ions away from each other. The GGA-PW91 values are similar, although the computed Mg–O bond distance is much larger (1.843 Å), reflecting the tendency of GGA to underbind atoms. This structure has a sizeable band gap (3.05 eV), though one much lower than that reported for bulk MgO [45–47].
(CaO)3. For (CaO)3, however, it was found that both initial structures optimized to different geometries. LDA calculations indicate that the ladder structure is slightly preferred, while GGA calculations predict a slight tilt in favor of the hexagonal structure (~0.1 eV). However, the energy differences are too small for one to make a definite statement regarding the relative stabilities. While the band gap for the hexagonal structure is 2.3 eV, that of the ladder structure is much smaller (1.8 eV).
In the ladder structure, there are two types of atoms—the central ones having a coordination number of 3, while the outer atoms having a coordination of 2 only and are more unsaturated. As a result, the Mulliken charge densities on the outer Ca atoms are 1.249, compared to 1.292 for the central Ca atom. Likewise, the terminal two-coordinate oxygens have a smaller negative charge (−1.240), while the central one has a partial charge of −1.311. The central bond length is also longer (2.294 Å) compared to the outer ones (1.952 Å). The increase in the central bond length can be understood in terms of the increased coordination of the central ions. The external field produced by the larger number of surrounding ions increases the ionic character of the central Ca–O bond, which resembles the Ca–O lattice limit (2.405 Å), while the terminal atoms are closer to the molecular limit (1.822 Å). The bond orders are 0.837 and 0.610, respectively. The Ca–O bond lengths and bond orders are 2.059 Å and 0.566, respectively, in the Ca–O–Ca face, and 2.108 Å and 0.662 in the O–Ca–O face. Again, this difference is due to the substantial ionic radius of O2−, and the O–Ca–O face has two of these ions, compared to only one in the opposite face.
Figure 2(b) depicts the optimized structures. It can be seen that the bond angles about the oxygens in the ladder structure are obtuse, while those about the calciums are acute.
(SrO)3. For (SrO)3, the ladder structure is preferred by 0.16 eV. In this structure (Figure 2(c)), the outer Sr–O–Sr angle is obtuse (101.0°), but the O–Sr–O bond angle is acute (86.9°). The shortest Sr–O bond distances are the outer ones, that is, between two 2-coordinate sites (2.107 Å), and the corresponding bond order is 1.128, while the longest bond is the central bond (2.450 Å) with a bond order of only 0.397. The Mulliken charges on the atoms show a behavior similar to that observed for (CaO)3; that is, the charges on the outer Sr, central Sr, outer O, and central O atoms are 0.915, 1.055, −0.952, and −0.981, respectively.
(BaO)3. For (BaO)3, again, two different structures are obtained. The ladder structure is slightly preferred (by 0.12 eV; see Table 1). The Mulliken charges on the atoms show a behavior similar to that observed for (CaO)3 and (SrO)3; that is, the charges on the outer Ba, central Ba, outer O, and central O atoms are 0.880, 1.035, −0.915, and −0.965, respectively. It is observed that the magnitude of the charge on the central (3-coordinate) oxide ion decreases with increasing atomic number of the metal ion (−1.311, −0.981, and −0.965, for ladder (CaO)3, (SrO)3, and (BaO)3, resp.), indicating the increasing involvement of the metal ion d orbitals and hence increasing covalency. As for SrO, the inner Ba–O bonds are longer (2.512 Å), and have a bond order of only 0.436, but the outer Ba–O bonds are shorter (2.179 Å), and the corresponding bond order is 1.160.
(MO)4 Where M = Mg, Ca, Sr, Ba. In this case, three structures, namely, slab, octagonal, and ladder, were studied. Table 2 gives the calculated energies and the HOMO-LUMO gaps for the various structures. It is found that in all cases, the slab form is preferred over the other two. The energy differences are higher for this case, and the slab form is more emphatically preferred. Comparison of the calculated results is also made with B3LYP calculations [40, 41].
Table 2: Binding energies, HOMO-LUMO gaps, and Fermi energies (in eV) for (MO)4 clusters.
(MgO)4. In the slab structure, all atoms are equivalent. We find that, from (MgO)4 onward, three-dimensional structures are favored. (MgO)4 has a cubic structure with rhombohedral distortion (Figure 3(a)), each atom being tricoordinated. The Mulliken charge on Mg is 0.930, and all the Mg–O bond orders are 0.570. The bonding is therefore primarily ionic.
Figure 3: (a) Optimized structures from LDA and GGA calculations, along with the initial geometries for the (MgO)4 system. (b) Optimized structures from LDA and GGA calculations, along with the initial geometries for the (CaO)4 system. (c) Optimized structures from LDA and GGA calculations, along with the initial geometries for the (SrO)4 system. The (BaO)4 system is similar.
(CaO)4. For CaO, the slab structure is again preferred. Although both LDA and GGA indicate this preference, it is interesting to see that the optimized structures for the octagonal initial structure are different for the two cases (Figure 3(b)) and different from the initial ring structure, unlike the case of (MgO)4 (Figure 3(a)). In the LDA case, the optimized structure consists of three fused rhombi, while the GGA optimized structure consists of fused six- and four-membered rings.
(SrO)4 and (BaO)4. (SrO)4 and (BaO)4 show behavior similar to (CaO)4, except that both LDA and GGA give similar optimized structures for the octagonal form. For this reason, only the optimized structures for (SrO)4 are shown in Figure 3(c).
An interesting result is that, although the slab structure is preferred in all cases, the next important structure is the ring for (MgO)4, but for the other metal oxides, it is the ladder structure. Unlike (MgO)4, the initial octagonal structure undergoes considerable distortion in all other cases.
(MO)5 Where M = Mg, Ca, Sr, Ba. The largest number of structures is possible in this case, namely, ladder, hexagonal, decagonal, chair, and many others (Table 3). In the case of MgO, both LDA and GGA predict that the most stable structure is the chair form. Similar is the case for CaO, that is, the chair form is the most stable structure, but in the case of SrO and BaO, the ladder form is found to be slightly preferred over the other forms. However, it may be noted that the energy differences are not very large for this stoichiometry.
Table 3: Binding energies, Fermi energies, and HOMO-LUMO gaps (in eV) for (MO)5 clusters.
(MgO)5. Although MgO seems to prefer hexagonal structures, interestingly, in this case, the hexagonal fused ring optimizes to the ladder structure (Figure 4(a)). Similarly, the MgO-I and MgO-II starting structures optimize to the same geometry, which is a hybrid of one hexagonal ring fused with two four-membered rings. The lowest energy structure for (MgO)5 is obtained from (MgO)4 by capping an edge such that the capping atoms are bicoordinated, while the rest of the atoms are tricoordinated. The corresponding edge on (MgO)4 opens up because of the increase in coordination number to three and consequent increase in ionic character. Similar is the case for the (CaO)5 structures, except that LDA predicts two distinct structures, CaO-I and CaO-II (Figure 4(b)). For (SrO)5 (Figure 4(c)) and (BaO)5 (Figure 4(d)), the ladder structures are preferred, and the chair structures could not be optimized.
Figure 4: (a) Optimized structures from LDA and GGA calculations, along with the initial geometries for the (MgO)5 system. (b) Optimized structures from LDA and GGA calculations, along with the initial geometries for the (CaO)5 system. (c) Optimized structures from LDA and GGA calculations, along with the initial geometries for the (SrO)5 system. (d) Optimized structures from LDA and GGA calculations, along with the initial geometries for the (BaO)5 system. Dashes represent structures that did not optimize.
(MO)6 Where M = Mg, Ca, Sr, Ba. In this case, three structures, namely, slab, hexagonal, and ladder, were studied (Table 4).
Table 4: Binding energies, HOMO-LUMO gaps, and Fermi energies (in eV) for (MO)6 clusters.
Table 4 indicates that in the case of MgO, the stacked hexagonal form is the most stable, and the slab structure is higher in energy by 0.21 eV. This energy difference increases to 0.57 eV for the (MgO)12 cluster [22, 23]. For the other metal oxides, the slab structure is preferred. The (MO)6 system is the first system for which both the slab structure and the hexagonal structure are possible, and we can make a comparison of the two. We observe that the Mg–O bond distances in the terminal rings of (MgO)6 are reduced when going from the rhombic (slab) to the hexagonal structure, from 1.919 Å in the former to 1.891 Å in the latter. As for the ladder structures, the bond distance in the inner ring of the three-ring slab cluster (2.123 Å) is markedly elongated as compared to the terminal bond distance. As expected, the outer Mg–O distance is closer to the molecular 1.822 Å, while the inner ring distance is closer to the 2.106 Å lattice limit. Two distinctly different interplanar distances (1.936 Å and 1.898 Å) are also observed, depending on which atom, Mg or O, sits on the terminal ring of the three-ring stack. It is larger when O is on the terminal ring because of the larger O2− radius. The interplanar distance in the hexagonal stacked structure (1.980 Å) is much larger than the Mg–O bond distance. The increased charge separation in the interior has only a minor effect on the terminal rings beyond that already seen for the double-ring (MgO)4 system (1.943 Å).
For CaO, we do not observe any Ca–O bond compression with increasing number of atoms in the terminal rings; that is, the bond lengths do not vary too much when going from the slab (2.108 Å) to the hexagonal ring (2.106 Å) structure. There is, however, an interior ring expansion (2.290 Å) in the slab structure, similar to the MgO system, due to increased polarization of the Ca–O bond under the influence of the terminal rings. The terminal Ca–O bond distance in the three-ring stack is observed to be the mean of the 2.405 Å lattice value and the molecular 1.822 Å distance [43]. The inner-ring Ca–O bond distance (2.288 Å) is considerably closer to the bulk limit.
This behavior continues down the series. For (SrO)6, the optimized interior Sr–O bond distance (2.422 Å) is similar to the lattice value (2.565 Å), while the terminal ring distance (2.245 Å) is smaller and closer to the gas phase value (1.920 Å). For (BaO)6, the inner and outer Ba–O bond distances are 2.564 Å and 2.379 Å, respectively, compared to the gas phase and bulk values of 1.940 Å and 2.762 Å, respectively. An interesting trend is also observed. As the atomic number of the metal ion increases, the interplanar distance in the stacked hexagonal ring approaches the M–O distance in the ring, suggesting that spherical clusters become more important for the heavier alkaline earth metal oxides. For example, for (MgO)6, the ring and interplanar distances are 1.891 and 1.980 Å, respectively. The corresponding distances are 2.118 Å and 2.159 Å (CaO), 2.272 Å and 2.290 Å (SrO), and 2.422 Å and 2.431 Å (BaO).
The inner ring bond distance is also only slightly larger than the bulk value for the (MgO)6 slab, and the deviation from the bulk value increases with increasing atomic number of the metal, suggesting a faster overall convergence to bulk properties for MgO clusters than for other alkaline metal oxide clusters.
Figure 5 depicts the optimized structures for the (MO)6 clusters for different starting geometries.
Figure 5: (a) Optimized structures from LDA and GGA calculations, along with the initial geometries for the (MgO)6 system. (b) Optimized structures from LDA and GGA calculations, along with the initial geometries for the (CaO)6 system. (c) Optimized structures from LDA and GGA calculations, along with the initial geometries for the (SrO)6 system. (d) Optimized structures from LDA and GGA calculations, along with the initial geometries for the (BaO)6 system.
##### 3.2. Vibrational Frequencies
For small MgO clusters, the experimental [48] vibrational frequency at 640 cm−1 matches a strong resonance observed at 651 cm−1 in the high-resolution electron energy loss spectroscopy (HREELS) surface phonon spectrum of the solid surface [49, 50]. The present calculations indicate that the vibrational frequency for this optically allowed stretching motion of oxygen atoms perpendicular to the surface of the (MgO)6 hexagonal cluster is at 691 cm−1 (intensity = 512 km mol−1), while for the slab structure, this band is at 740 cm−1 (intensity = 445 km mol−1). For the (MgO)12 nanocluster, the corresponding values [22, 23] are 657 cm−1 (intensity = 1132 km mol−1) (hexagonal) and 677 cm−1 (intensity = 705 km mol−1) (slab).
##### 3.3. Electronic Structures
As noted previously, the preferred geometry for the (MgO)6 cluster is tubular, which becomes more stable than the rectangular bulk-like cluster due to stabilization of the occupied levels. The isosurfaces of the HOMO and LUMO reveal that these comprise mainly the 2p orbitals of oxygen and 3s and 3p orbitals of Mg, respectively, but the LUMO also has an oxygen 3s component (see Figure 6). Thus, the anion-centered nature of the HOMO indicates that its energy depends strongly on the O–O distances. In the hexagonal structure, the O–O distance is 3.45 Å, compared to 2.86 Å for the rectangular cluster. The smaller anion-anion repulsion in the nanotube stabilizes the HOMO, increasing the HOMO-LUMO gap.
Figure 6: Isosurfaces (isovalue = 0.03 Ha) of the HOMOs and LUMOs of the (MgO)6 hexagonal and cubic clusters.
The electronic density of states (DOS) near the Fermi level for the two structures is shown in Figure 7. The separation of bands for the hexagonal structure is about 3.3 eV, which is higher than that obtained for the slab cluster (2.9 eV), but considerably smaller than the value of 4.8 eV computed for bulk MgO.
Figure 7: Calculated partial densities of states for the (a) hexagonal and (b) slab (MgO)6.
The DOS plots for the nanotube and cube-like structure are qualitatively similar, but one important difference is noticeable. In the hexagonal structure, there is greater involvement of the Mg2+ ion 3d orbitals near the Fermi level, although the population analysis reveals that the Mg electron configurations in the two are similar (3s0.34 2p6 3p0.46 3d0.26 for the hexagonal structure and 3s0.36 2p6 3p0.45 3d0.26 for the terminal slab ions). The peak height for the slab structure is also smaller. The computed electron configuration of the magnesium ion differs from the expected 2p6: the 3s, 3p, and 3d orbitals are also occupied, leading to a formal charge on Mg closer to one than the expected two.
The densities of states were also calculated for the slab structures of the other (MO)6 systems. These are shown in Figure 8.
Figure 8: Calculated partial densities of states for the (a) (CaO)6, (b) (SrO)6, and (c) (BaO)6 slabs.
It can be seen from Figure 8 that the involvement of d orbitals increases with increasing atomic number of the metal ion. Indeed, the d orbital population increases from 0.263 in the terminal atoms of the MgO slab to 0.406 in CaO, 0.715 in SrO, and 0.835 in BaO. This happens at the expense of the p orbital population, so that the overall charge on the metal ion remains close to +1. For the inner metal ions, the participation of d orbitals is a little smaller. The additional electron comes from a 2p orbital of oxygen, and the oxygen atoms have electron configurations close to 1s2 2s2 2p5 in all cases. Another noticeable feature is that, while the bands are sharp for the other (MO)6 systems, they are broad for (CaO)6. In fact, CaO appears to exhibit some anomalous behavior, as the Mulliken charge on Ca is slightly higher than that calculated for the metal ions in the other metal oxides.
#### 4. Discussion
The bond lengths and total energy per molecule for the lowest energy structures are given in Table 5. The binding energy for a single MgO molecule is quite low in comparison to the clusters. The bond length (1.743 Å) is, however, small. As the cluster size increases, the bond lengths and binding energies increase in an oscillatory manner. The bond length is elongated to 1.882 Å in the most favorable structure of (MgO)2, which is a rhombus. The binding energy per molecule also increases significantly, though this value is still much smaller than that for larger clusters. The lowest energy structure for (MgO)3 is planar and ring-type, with each atom being bicoordinated. The Mg–O bond length in this case is shorter compared with the (MgO)2 cluster, and there is a significant increase in the binding energy. The bond angles about the oxygen ions are smaller than those about the magnesium ions, but the Mg–O bond lengths remain the same within both planar structures. In some of the structures, notably the chair structures, a large variation in M–O distances within the cluster is discernible from Table 5.
Table 5: Bond lengths (Å) and binding energies (eV) per MgO unit for different stoichiometries of (MO)n.
The increased stability of slab structures is obvious from the binding energies per molecule for all systems (except MgO, for which the hexagonal ring-based structures turn out to be more stable), in agreement with previous studies [29, 32, 40, 41]. For MgO, our result that ring structures lie higher in energy contrasts with reported results [17–20], in which (MgO)4 and (MgO)5 were reported to have ring structures. For all (MO)4, the slab structures are preferred over the octagonal ring and ladder structures. However, while for all the other (MO)4 systems the ring structure is the least preferred, for (MgO)4 it is preferred over the ladder structure. It may be noted that the ring structure undergoes considerable distortion in all cases except MgO.
An important observation from the optimized three-dimensional isomers is the chair-type structures of the (MO)5 clusters. This is very interesting, as it indicates the existence of some covalent bonding in MgO clusters, similar to covalently bonded silicon, although MgO is considered to be ionic in the bulk. Covalent bonding in alkaline-earth oxides increases [51] as one goes from MgO to CaO, SrO, and BaO due to the increasing involvement of d orbitals in bonding (Figure 8). While in the bulk this is negligible for MgO, the reduction in bond lengths in clusters could be responsible for the increased covalent character.
In the case of the heavier oxides (CaO, SrO, and BaO), there is a consistent preference for slab-shaped structures, but the stability difference between the most stable and the second most stable structure is always small. Thus, hexagonal rings are slightly more stable than the slab-shaped structures in the case of MgO, whereas the opposite is true for CaO. Consequently, we consider whether this possible trend continues, that is, toward increased relative stability of the slab structures with respect to the hexagonal ones. Whereas the overall patterns for the Sr and Ba compounds are similar to Ca, we note in the case of (BaO)6 a smaller difference in stability between the slab and hexagonal ring-based isomers. Based on these observations, it can be concluded that there is no definite trend towards an increasing preference for the slab shape with increasing atomic number of the metal for the small alkaline earth oxide clusters.
The similar values of the calculated Mulliken charges, approximately +1 on the metal ions in the various metal oxide clusters, make it difficult to assign an electronic cause for the slight preference for hexagonal structures in the case of MgO and slab structures for the other metal oxides. This leads us to believe that it is a packing effect rather than an electronic one. As stated in the sections above, due to the small cation size in MgO, the Mg–O bond is short, and, consequently, the four-membered ring in the slab structure is too strained. In order to accommodate the small cation and the large anion in the four-membered ring, the Mg–O bond length increases, leading to a weakening of the bonding and consequent instability. On the other hand, the octagonal ring structure is too open and is hence not favored for any of the metal oxides. This leaves the six-membered ring as a compromise for MgO systems.
An alternate explanation for the preference for the six-membered ring structure in the case of MgO could be the existence of aromaticity. In order to quantify aromaticity, we used the nucleus-independent chemical shift (NICS) method proposed by Schleyer et al. [52]. The NICS values were calculated at the center of the six-membered ring (NICS(0)) and at a point 1 Å above it (NICS(1)) and compared with the corresponding values calculated for benzene. In this method, negative NICS values indicate aromaticity and positive values antiaromaticity. We had earlier [53, 54] concluded that the NICS(1) value is the best measure of aromaticity for benzene. The calculated NICS(1) value for benzene is −10.84, whereas the corresponding values for the (MgO)3 and (CaO)3 ring structures are −2.05 and 2.75, respectively, indicating that the (MgO)3 ring retains roughly 20% of the aromaticity of benzene by this measure, while the (CaO)3 ring is antiaromatic, accounting for the increased stability of the MgO ring.
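The roughly 20% figure follows from the ratio of the computed NICS(1) values (a simple reading of the numbers above, not a statement made explicitly in the original text):
$$\frac{\mathrm{NICS}(1)_{(\mathrm{MgO})_3}}{\mathrm{NICS}(1)_{\mathrm{benzene}}}=\frac{-2.05}{-10.84}\approx 0.19.$$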
#### 5. Conclusions
An important finding of the present study is that hexagonal tube-like structures are preferred for MgO clusters, while slab-like structures are preferred for the other alkaline earth metal oxide clusters. Explanations based on ionic size effects and aromaticity have been proposed in this work. It is gratifying to note that experimental observations of mass spectra [14, 15, 48, 55] indicate the existence and stabilities of such stacked hexagonal rings, at least for small gas-phase clusters of MgO. Other experimental and theoretical [22, 23, 56] studies also provide evidence for the existence and stabilities of MgO nanotubes.
An outstanding result of the present study is the similar stabilities of the hexagonal-ring-based structures and the rock-salt-like slab-shaped isomers. While this observation is important as such and contradicts the exclusive nature of the latter structural shapes proposed previously [17–20], it is noteworthy how the stability ordering changes as the atomic number of the metal increases among the alkaline earth elements. In the case of MgO, the hexagonal-ring-based structure is the more stable one, although the energy difference between the two structures is small. Going to CaO, the situation is reversed in that the slab structure prevails. For SrO, the trend towards increasing stabilization of the slab structures continues, and the slab structure is evidently the more stable one. For BaO, the slab structure is still the preferred one, but to a lesser extent. An explanation based on simple packing arguments has been proposed to explain the variation in relative stabilities. Aromaticity in the (MgO)3 ring also accounts for its stability.
It is difficult to find experimental verification for our results, as neutral clusters are difficult to study experimentally. Their structures are usually inferred indirectly from the mass spectra of ionized clusters, the more abundant species being interpreted as the more stable. However, the results from such studies on alkaline earth metal oxides are contradictory and depend on the process of formation of the clusters. Two conclusions, however, result from these studies. Firstly, for small clusters, hexagonal stacked rings are preferred for MgO, but these give way to rock-salt cubic structures for large values of n. Secondly, fragmentation clusters are found both for even values of n and when n is a multiple of 3. These results suggest that the basic cluster-building blocks are different for the two materials, as observed from the present calculations. The results of the experimental studies and our calculations can be reconciled if we assume that the neutral stoichiometric (MgO)n and (CaO)n clusters show structural differences: the basic building block is an (MgO)3 hexagonal fragment in the case of MgO and a (CaO)3 rectangular 2 × 3 (or double-chain) fragment for CaO, as the one found in the present studies. This difference is just a packing effect, due to the larger overlap repulsion between anions in MgO since the cation size is very small, and also due to the aromaticity of the (MgO)3 ring.
These experiments also suggest that the hexagonal ring and rectangular slab structures are topologically equivalent. Deformation along one of the directions orthogonal to the ring stack transforms the tube into the slab structure. As noted earlier, this intense vibration mode for (MgO)6 occurs at a low wavenumber (691 cm−1). Thus, experimental knowledge of mass abundances alone cannot distinguish between the two structures, and only sophisticated calculations such as the present ones can decide the relative stabilities. Our earlier studies [22, 23] on the (MgO)12 cluster had indicated that the (MgO)12 nanotube, consisting of four stacked hexagonal (MgO)3 rings, is more stable than the bulk-like cubic structure by 0.48 eV. Moreover, the calculated energy barrier for the rearrangement of the cubic structure to the tube was found to be only 0.13 eV, so the two structures are easily interconvertible.
#### Conflict of Interests
The authors declare no conflict of interests with any financial organization regarding the material presented in the paper.
#### Acknowledgments
The authors thank the Council of Scientific and Industrial Research (CSIR) for financial support (Grant no. 01(2554)/12/EMR-1). Ritu Gaba and Upasana Issar thank the CSIR and the University Grants Commission (UGC), respectively, for Senior Research Fellowships.
#### References
1. A. Jain, V. Kumar, M. Sluiter, and Y. Kawazoe, “First principles studies of magnesium oxide clusters by parallelized Tohoku University Mixed-Basis program TOMBO,” Computational Materials Science, vol. 36, no. 1-2, pp. 171–175, 2006.
2. N. Sharma and R. Kakkar, “Recent advancements in warfare agents/metal oxides surface chemistry and their simulant studies,” Advanced Materials Letters, 2013.
3. P. N. Kapoor, A. K. Bhagi, R. S. Mulukutla, and K. J. Klabunde, Dekker Encyclopedia of Nanoscience & Technology, Marcel Dekker, New York, NY, USA, 2004.
4. A. Khaleel, P. N. Kapoor, and K. J. Klabunde, “Nanocrystalline metal oxides as new adsorbents for air purification,” Nanostructured Materials, vol. 11, no. 4, pp. 459–468, 1999.
5. J. Heyd, J. E. Peralta, and G. E. Scuseria, “Energy band gaps and lattice parameters evaluated with the Heyd-Scuseria-Ernzerhof screened hybrid functional,” Journal of Chemical Physics, vol. 123, no. 17, Article ID 174101, 8 pages, 2005.
6. S. Utamapanya, K. J. Klabunde, and J. R. Schlup, “Nanoscale metal oxide particles/clusters as chemical reagents,” Chemistry of Materials, vol. 3, pp. 175–181, 1991.
7. O. Koper, Y. X. Li, and K. J. Klabunde, “Destructive adsorption of chlorinated hydrocarbons on ultrafine (nanoscale) particles of calcium oxide,” Chemistry of Materials, vol. 5, no. 4, pp. 500–505, 1993.
8. Y. X. Li and K. J. Klabunde, “Nano-scale metal oxide particles as chemical reagents. Destructive adsorption of a chemical agent simulant, dimethyl methylphosphonate, on heat-treated magnesium oxide,” Langmuir, vol. 7, no. 7, pp. 1388–1393, 1991.
9. Y.-X. Li, J. R. Schlup, and K. J. Klabunde, “Fourier transform infrared photoacoustic spectroscopy study of the adsorption of organophosphorus compounds on heat-treated magnesium oxide,” Langmuir, vol. 7, no. 7, pp. 1394–1399, 1991.
10. R. C. Whited, C. J. Flaten, and W. C. Walker, “Exciton thermoreflectance of MgO and CaO,” Solid State Communications, vol. 13, no. 11, pp. 1903–1905, 1973.
11. I. S. Elfimov, S. Yunoki, and G. A. Sawatzky, “Possible path to a new class of ferromagnetic and half-metallic ferromagnetic materials,” Physical Review Letters, vol. 89, no. 21, Article ID 216403, 4 pages, 2002.
12. B. Nagappa and G. T. Chandrappa, “Nanocrystalline CaO as adsorbent to remove COD from paper mill effluent,” Journal of Nanoscience and Nanotechnology, vol. 7, no. 3, pp. 1039–1042, 2007.
13. F. Jin, Y. Liu, and M. D. Christopher, “Barium strontium oxide coated carbon nanotubes as field emitters,” Applied Physics Letters, vol. 90, no. 14, Article ID 143114, 3 pages, 2007.
14. W. A. Saunders, “Structural dissimilarities between small II-VI compound clusters: MgO and CaO,” Physical Review B, vol. 37, no. 11, pp. 6583–6586, 1988.
15. W. A. Saunders, “Molecules and clusters,” Zeitschrift für Physik D, vol. 12, pp. 601–603, 1989.
16. T. P. Martin and T. Bergmann, “Mass spectra of Ca-O and Ba-O clusters,” The Journal of Chemical Physics, vol. 90, no. 11, pp. 6664–6667, 1989.
17. P. J. Ziemann and A. W. Castleman, “Mass-spectrometric study of the formation, evaporation, and structural properties of doubly charged MgO clusters,” Physical Review B, vol. 44, no. 12, pp. 6488–6499, 1991.
18. P. J. Ziemann and A. W. Castleman, “Structures and bonding properties of calcium oxide clusters inferred from mass spectral abundance patterns,” The Journal of Physical Chemistry, vol. 96, no. 11, pp. 4271–4276, 1992.
19. P. J. Ziemann and A. W. Castleman Jr., “Mass spectrometric study of MgO clusters produced by the gas aggregation technique,” Zeitschrift für Physik D, vol. 20, pp. 97–99, 1991.
20. P. J. Ziemann and A. W. Castleman Jr., “Stabilities and structures of gas phase MgO clusters,” Journal of Chemical Physics, vol. 94, no. 1, pp. 718–728, 1991.
21. M. Gutowski, P. Skurski, X. Li, and L.-S. Wang, “(MgO)n− (n = 1–5) clusters: multipole-bound anions and photodetachment spectroscopy,” Physical Review Letters, vol. 85, no. 15, pp. 3145–3148, 2000.
22. R. Kakkar, P. N. Kapoor, and K. J. Klabunde, “Theoretical study of the adsorption of formaldehyde on magnesium oxide nanosurfaces: size effects and the role of low-coordinated and defect sites,” The Journal of Physical Chemistry B, vol. 108, no. 47, pp. 18140–18148, 2004.
23. R. Kakkar, P. N. Kapoor, and K. J. Klabunde, “First principles density functional study of the adsorption and dissociation of carbonyl compounds on magnesium oxide nanosurfaces,” Journal of Physical Chemistry B, vol. 110, no. 51, pp. 25941–25949, 2006.
24. R. Dong, X. Chen, X. Wang, and W. Lu, “Structural transition of hexagonal tube to rocksalt for (MgO)3n, 2 ≤ n ≤ 10,” Journal of Chemical Physics, vol. 129, no. 4, Article ID 044705, 5 pages, 2008.
25. M. Srnec and R. Zahradník, “Small group IIa-VIa clusters and related systems: a theoretical study of physical properties, reactivity, and electronic spectra,” European Journal of Inorganic Chemistry, vol. 2007, no. 11, pp. 1529–1543, 2007.
26. M. Wilson, “Stability of small MgO nanotube clusters: predictions of a transferable ionic potential model,” The Journal of Physical Chemistry B, vol. 101, no. 25, pp. 4917–4924, 1997.
27. M. Wilson, P. A. Madden, N. C. Pyper, and J. H. Harding, “Molecular dynamics simulations of compressible ions,” Journal of Chemical Physics, vol. 104, no. 20, pp. 8068–8081, 1996.
28. E. De La Puente, A. Aguado, A. Ayuela, and J. M. López, “Structural and electronic properties of small neutral (MgO)n clusters,” Physical Review B, vol. 56, no. 12, pp. 7607–7614, 1997.
29. M.-J. Malliavin and C. Coudray, “Ab initio calculations on (MgO)n, (CaO)n, and (NaCl)n clusters (n = 1–6),” Journal of Chemical Physics, vol. 106, no. 6, pp. 2323–2330, 1997.
30. J. M. Recio, R. Pandey, A. Ayuela, and A. B. Kunz, “Molecular orbital calculations on (MgO)n and (MgO)n+ clusters (n = 1–13),” The Journal of Chemical Physics, vol. 98, no. 6, pp. 4783–4792, 1993.
31. J. M. Recio and R. Pandey, “Ab initio study of neutral and ionized microclusters of MgO,” Physical Review A, vol. 47, no. 3, pp. 2075–2082, 1993.
32. A. Aguado and J. M. López, “Structures and stabilities of CaO and MgO clusters and cluster ions: an alternative interpretation of the experimental mass spectra,” The Journal of Physical Chemistry B, vol. 104, no. 35, pp. 8398–8405, 2000.
33. B. Delley, “An all electron numerical method for solving the local density functional for polyatomic molecules,” Journal of Chemical Physics, vol. 92, no. 1, pp. 508–517, 1990.
34. B. Delley, “Analytic energy derivatives in the numerical local density functional approach,” Journal of Chemical Physics, vol. 94, no. 11, pp. 7245–7250, 1991.
35. B. Delley, “Fast calculation of electrostatics in crystals and large molecules,” The Journal of Physical Chemistry, vol. 100, no. 15, pp. 6107–6110, 1996.
36. B. Delley, “From molecules to solids with the DMol3 approach,” Journal of Chemical Physics, vol. 113, no. 18, pp. 7756–7764, 2000.
37. J. P. Perdew, J. A. Chevary, S. H. Vosko et al., “Atoms, molecules, solids, and surfaces: applications of the generalized gradient approximation for exchange and correlation,” Physical Review B, vol. 46, no. 11, pp. 6671–6687, 1992.
38. J. P. Perdew and Y. Wang, “Accurate and simple analytic representation of the electron-gas correlation energy,” Physical Review B, vol. 45, no. 23, pp. 13244–13249, 1992.
39. R. Kakkar, R. Grover, and P. Gahlot, “Density functional study of the properties of isomeric aminophenylhydroxamic acids and their copper (II) complexes,” Polyhedron, vol. 25, no. 3, pp. 759–766, 2006.
40. F. Bawa and I. Panas, “Competing pathways for MgO, CaO, SrO, and BaO nanocluster growth,” Physical Chemistry Chemical Physics, vol. 4, pp. 103–108, 2002.
41. Á. Vibók and G. J. Halász, “Parametrization of complex absorbing potentials for time-dependent quantum dynamics using multi-step potentials,” Physical Chemistry Chemical Physics, vol. 3, pp. 3042–3047, 2001.
42. S. Veliah, R. Pandey, Y. S. Li, J. M. Newsam, and B. Vessal, “Density functional study of structural and electronic properties of cube-like MgO clusters,” Chemical Physics Letters, vol. 235, no. 1-2, pp. 53–57, 1995.
43. K. P. Huber and G. Herzberg, Molecular Spectra and Molecular Structure, IV. Constants of Diatomic Molecules, van Nostrand Reinhold, New York, NY, USA, 1979.
44. M. C. Day and J. Selbin, Theoretical Inorganic Chemistry, chapter 4, Reinhold, New York, NY, USA; Chapman & Hall, London, UK, 1962.
45. D. J. Driscoll, W. Martir, J. X. Wang, and J. H. Lunsford, “Formation of gas-phase methyl radicals over magnesium oxide,” Journal of the American Chemical Society, vol. 107, no. 1, pp. 58–63, 1985.
46. T. Ito and J. H. Lunsford, “Synthesis of ethylene and ethane by partial oxidation of methane over lithium-doped magnesium oxide,” Nature, vol. 314, pp. 721–722, 1985.
47. T. Ito, J. X. Wang, C. H. Lin, and J. H. Lunsford, “Oxidative dimerization of methane over a lithium-promoted magnesium oxide catalyst,” Journal of the American Chemical Society, vol. 107, no. 18, pp. 5062–5068, 1985.
48. D. Van Heijnsbergen, G. Von Helden, G. Meijer, and M. A. Duncan, “Infrared resonance-enhanced multiphoton ionization spectroscopy of magnesium oxide clusters,” Journal of Chemical Physics, vol. 116, no. 6, pp. 2400–2406, 2002.
49. P. A. Cox and A. A. Williams, “HREELS studies of simple ionic solids,” Journal of Electron Spectroscopy and Related Phenomena, vol. 39, pp. 45–48, 1986.
50. V. E. Henrich and P. A. Cox, The Surface Science of Metal Oxides, Cambridge University Press, Cambridge, Mass, USA, 1994.
51. G. Pacchioni, C. Sousa, F. Illas, P. S. Bagus, and F. Parmigiani, “Measures of ionicity of alkaline-earth oxides from the analysis of ab initio cluster wave functions,” Physical Review B, vol. 48, no. 16, pp. 11573–11582, 1993.
52. P. V. R. Schleyer, C. Maerker, A. Dransfeld, H. Jiao, and N. J. R. Van Eikema Hommes, “Nucleus-independent chemical shifts: a simple and efficient aromaticity probe,” Journal of the American Chemical Society, vol. 118, no. 26, pp. 6317–6318, 1996.
53. R. Kakkar and C. Singh, “Theoretical study of the kojic acid structure in gas phase and aqueous solution,” International Journal of Quantum Chemistry, vol. 111, no. 15, pp. 4318–4329, 2011.
54. R. Kakkar, M. Bhandari, and R. Gaba, “Tautomeric transformations and reactivity of alloxan,” Computational and Theoretical Chemistry, vol. 986, pp. 14–24, 2012.
55. G. W. Wang and H. Hattori, “Reaction of adsorbed carbon monoxide with hydrogen on magnesium oxide,” Journal of the Chemical Society, Faraday Transactions 1, vol. 80, pp. 1039–1047, 1984.
56. G. Bilalbegović, “Structural and electronic properties of MgO nanotube clusters,” Physical Review B, vol. 70, no. 4, Article ID 045407, 6 pages, 2004. |
## Recommended Posts
Hiya again guys,
I have a niggling problem with my CSM. The middle and far cascades noticeably "swim" as you move around. It's not really noticeable on the close one, but I'm sure that's probably just due to the filtering and the fact that it has such a high-quality map to work from.
Anyway, I have 3 cascades and I'm using a bounding box for each. They all overlap with their near clip, and this works fairly nicely, but I was looking at the CSM sample that someone nicely ported over to MonoGame. The problem is I can't get the sphere-based bounding boxes working at all for me. My debug rendertargets for all the cascades are just blank. If I switch my camera to be the shadow light camera (the sun), it seems fine (and it works using my bounding box method, so I think that's fine). I THINK I'm just calculating the bounding box (using a sphere) incorrectly somehow.
public void GenerateCSMOrthoSlice(float pfarClip)
{
Vector3[] frustumCornersWS = new Vector3[8];
Vector3[] frustumCornersLS = new Vector3[8];
BoundingFrustum viewFrustum = new BoundingFrustum(_Camera.CameraView * Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, _Camera._aspectRatio, 10, pfarClip));
frustumCornersWS = viewFrustum.GetCorners();
Vector3 frustumCentroid = new Vector3(0, 0, 0);
for (int i = 0; i < 8; i++)
frustumCentroid += frustumCornersWS[i];
frustumCentroid /= 8;
lightsView = Matrix.Identity;
lightsViewProjectionMatrix = Matrix.Identity;
ShadowLightPos = frustumCentroid + (SunlightDirection * 100);
// Note: frustumCornersLS is used below but never filled in as posted; presumably
// the original code built the light view and transformed the corners here, e.g.:
lightsView = Matrix.CreateLookAt(ShadowLightPos, frustumCentroid, Vector3.Up);
for (int i = 0; i < 8; i++)
    frustumCornersLS[i] = Vector3.Transform(frustumCornersWS[i], lightsView);
Vector3 mins = frustumCornersLS[0];
Vector3 maxes = frustumCornersLS[0];
for (int i = 0; i < 8; i++)
{
if (frustumCornersLS[i].X > maxes.X)
maxes.X = frustumCornersLS[i].X;
else if (frustumCornersLS[i].X < mins.X)
mins.X = frustumCornersLS[i].X;
if (frustumCornersLS[i].Y > maxes.Y)
maxes.Y = frustumCornersLS[i].Y;
else if (frustumCornersLS[i].Y < mins.Y)
mins.Y = frustumCornersLS[i].Y;
if (frustumCornersLS[i].Z > maxes.Z)
maxes.Z = frustumCornersLS[i].Z;
else if (frustumCornersLS[i].Z < mins.Z)
mins.Z = frustumCornersLS[i].Z;
}
float diagonalLength = (frustumCornersWS[0] - frustumCornersWS[6]).Length();
diagonalLength += 2; //Without this, the shadow map isn't big enough in the world.
float worldsUnitsPerTexel = diagonalLength / (float)4096;
Vector3 vBorderOffset = (new Vector3(diagonalLength, diagonalLength, diagonalLength) - (maxes - mins)) * 0.5f;
maxes += vBorderOffset;
mins -= vBorderOffset;
mins /= worldsUnitsPerTexel;
mins.X = (float)Math.Floor(mins.X);
mins.Y = (float)Math.Floor(mins.Y);
mins.Z = (float)Math.Floor(mins.Z);
mins *= worldsUnitsPerTexel;
maxes /= worldsUnitsPerTexel;
maxes.X = (float)Math.Floor(maxes.X);
maxes.Y = (float)Math.Floor(maxes.Y);
maxes.Z = (float)Math.Floor(maxes.Z);
maxes *= worldsUnitsPerTexel;
ShadowLightProjection = Matrix.CreateOrthographicOffCenter(mins.X, maxes.X, mins.Y, maxes.Y, -maxes.Z - 500f, -mins.Z);
}
What I tried (which doesn't work at all) is:
public void GenerateCSMOrthoSlice(float pfarClip)
{
Vector3 minExtents = Vector3.Zero;
Vector3 maxExtents = Vector3.Zero;
Vector3[] frustumCornersWS = new Vector3[8];
Vector3[] frustumCornersLS = new Vector3[8];
BoundingFrustum viewFrustum = new BoundingFrustum(_Camera.CameraView * Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, _Camera._aspectRatio, 10, pfarClip));
frustumCornersWS = viewFrustum.GetCorners();
Vector3 frustumCentroid = new Vector3(0, 0, 0);
for (int i = 0; i < 8; i++)
frustumCentroid += frustumCornersWS[i];
frustumCentroid /= 8;
{
// This needs to be constant for it to be stable
var upDir = Vector3.Up;
// Calculate the radius of a bounding sphere surrounding the frustum corners.
// Note: as posted, dist is computed but never accumulated, so minExtents and
// maxExtents below stay Vector3.Zero (the original also referenced an undefined
// _frustumCorners; the local frustumCornersWS is used here). The sample this is
// based on presumably tracked the maximum, e.g.:
var sphereRadius = 0.0f;
for (var i = 0; i < 8; ++i)
{
    var dist = (frustumCornersWS[i] - frustumCentroid).Length();
    sphereRadius = Math.Max(sphereRadius, dist);
}
maxExtents = new Vector3(sphereRadius);
minExtents = -maxExtents;
}
lightsView = Matrix.Identity;
lightsViewProjectionMatrix = Matrix.Identity;
//Vector3 sunlightdirection = new Vector3(0.21f, 0.11f, -0.5f);
ShadowLightPos = frustumCentroid + (SunlightDirection * 100);
Vector3 mins = frustumCornersLS[0];
Vector3 maxes = frustumCornersLS[0];
for (int i = 0; i < 8; i++)
{
if (frustumCornersLS[i].X > maxes.X)
maxes.X = frustumCornersLS[i].X;
else if (frustumCornersLS[i].X < mins.X)
mins.X = frustumCornersLS[i].X;
if (frustumCornersLS[i].Y > maxes.Y)
maxes.Y = frustumCornersLS[i].Y;
else if (frustumCornersLS[i].Y < mins.Y)
mins.Y = frustumCornersLS[i].Y;
if (frustumCornersLS[i].Z > maxes.Z)
maxes.Z = frustumCornersLS[i].Z;
else if (frustumCornersLS[i].Z < mins.Z)
mins.Z = frustumCornersLS[i].Z;
}
float diagonalLength = (frustumCornersWS[0] - frustumCornersWS[6]).Length();
diagonalLength += 2; //Without this, the shadow map isn't big enough in the world.
float worldsUnitsPerTexel = diagonalLength / (float)4096;
Vector3 vBorderOffset = (new Vector3(diagonalLength, diagonalLength, diagonalLength) - (maxes - mins)) * 0.5f;
maxes += vBorderOffset;
mins -= vBorderOffset;
mins /= worldsUnitsPerTexel;
mins.X = (float)Math.Floor(mins.X);
mins.Y = (float)Math.Floor(mins.Y);
mins.Z = (float)Math.Floor(mins.Z);
mins *= worldsUnitsPerTexel;
maxes /= worldsUnitsPerTexel;
maxes.X = (float)Math.Floor(maxes.X);
maxes.Y = (float)Math.Floor(maxes.Y);
maxes.Z = (float)Math.Floor(maxes.Z);
maxes *= worldsUnitsPerTexel;
ShadowLightProjection = Matrix.CreateOrthographicOffCenter(mins.X, maxes.X, mins.Y, maxes.Y, -maxes.Z - 500f, -mins.Z);
{
ShadowLightProjection = Matrix.CreateOrthographicOffCenter(minExtents.X, minExtents.Y, maxExtents.X, maxExtents.Y, 0.0f, pfarClip);
// Create the rounding matrix, by projecting the world-space origin and determining
// the fractional offset in texel space
var shadowOrigin = new Vector4(0.0f, 0.0f, 0.0f, 1.0f);
// Note: roundedOrigin is never computed as posted. In the sample this snippet
// comes from, shadowOrigin is first transformed by the shadow view-projection,
// scaled by half the shadow map size, and rounded, presumably something like:
//   shadowOrigin = Vector4.Transform(shadowOrigin, lightsView * ShadowLightProjection) * (4096 / 2.0f);
//   var roundedOrigin = new Vector4((float)Math.Round(shadowOrigin.X), (float)Math.Round(shadowOrigin.Y), 0.0f, 0.0f);
var roundOffset = roundedOrigin - shadowOrigin;
roundOffset = roundOffset * (2.0f / 4096);
roundOffset.Z = 0.0f;
roundOffset.W = 0.0f;
}
//
}
##### Share on other sites
I'm no MonoGame expert, but looking at your code it looks like you mixed up the parameters to Matrix.CreateOrthographicOffCenter() in the stabilized version.
##### Share on other sites
Thanks, yes. Well spotted. Unfortunately that had no effect.
Shimmery shadows have been bugging me for ages, but I can't seem to figure out how to use the sphere method of building my cascades.
##### Share on other sites
If anyone could give me some code/pseudocode for finding the bounding sphere of my shadow camera, that would do. I was reading ShaderX7 on this subject, which was interesting, but the math is over my head. I don't mind the wasted space. By the time my game is finished, games will be able to support one giant texture lol
Thanks
##### Share on other sites
Posted (edited)
Here is my code that is used in production (So I know it works ;-) )
V4 vMin = vCorners[0];
V4 vMax = vCorners[0];
for(int j = 1; j<8; j++) {
vMin = VMin(vMin, vCorners[j]);
vMax = VMax(vMax, vCorners[j]);
}
V4 vSize = vMax - vMin;
float fRadius = VLen3(vSize).x() * 0.5f;
// Snap center to shadow texel
// This is done by transforming the center of the CSM frustum into light post-projection (texel) space
// and performing the snapping in that space.
M44 mCameraToLight = MInverseAffine(mLightToCamera);
V4 vCenterCamera = VLerp(vMin, vMax, VSplat(0.5f));
V4 vCenterLight = VTransform43(mCameraToLight, vCenterCamera);
// Note: a couple of lines appear to be elided here as posted; vCenterTexel is
// presumably vCenterLight taken through the light projection into texel space,
// and vSnap its floor-snapped counterpart.
vCenterTexel = VProject(vCenterTexel);
vSnap = VFloor(vSnap);
V4 vSnapOffset = VLoadZero() - vSnap;
mSnapTrans.vTrans = VXY01(vSnapOffset);
Hope it makes sense.
BTW: Looking at your code I think the issue is the way you apply the offset directly to the matrix instead of doing a multiplication with a translation matrix (Like I do in the last line)
Henning
Oh, and I now use the quantization trick instead of the code above to get more precision (http://dev.theomader.com/stable-csm/)
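Translated into MonoGame/C# terms, semler's point (apply the snap offset as a translation matrix multiplied onto the light matrix, rather than writing the offset into the matrix fields directly) would look roughly like the sketch below. This is our illustration, not semler's actual code; shadowMapSize and the method name are assumptions.
// Minimal sketch: stabilize a light view-projection by snapping in
// post-projection (texel) space. Assumes MonoGame/XNA types and a square map.
Matrix StabilizeLightMatrix(Matrix lightView, Matrix lightProj, float shadowMapSize)
{
    Matrix lightViewProj = lightView * lightProj;
    // Project the world-space origin and measure where it lands in texel units.
    // (For an orthographic projection, w stays 1, so no perspective divide is needed.)
    Vector4 origin = Vector4.Transform(new Vector4(0f, 0f, 0f, 1f), lightViewProj);
    origin *= shadowMapSize / 2f;
    // The fractional part is how far we sit off the texel grid.
    Vector4 rounded = new Vector4((float)Math.Round(origin.X),
                                  (float)Math.Round(origin.Y), 0f, 0f);
    Vector4 offset = (rounded - origin) * (2f / shadowMapSize);
    // Apply the correction as a translation matrix multiply, as suggested above,
    // instead of poking the offset into individual matrix elements.
    return lightViewProj * Matrix.CreateTranslation(offset.X, offset.Y, 0f);
}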
Edited by semler
##### Share on other sites
Thanks, your help is much appreciated. I'm not familiar with that language, but I'll see if I can work it out (I'm just a C# XNA/MonoGame guy).
##### Share on other sites
This is a video of the problem. I'm not sure if I'm barking up the wrong tree, but shadows are very "jiggly" on some objects. I wanted to try the sphere-based approach, which, from my understanding, means making a bounding sphere from the shadow camera view frustum and then creating my usual orthographic projection for my cascade from that. Am I understanding it correctly?
Here's a video of the annoying problem I have. Shortly into the video I turn on the rendertarget display so you can see the 3 cascade render targets on the right. As you can see, there's a lot of jiggle going on. Depth bias doesn't seem to make any difference really.
##### Share on other sites
I'm a complete noob here, but I see that you rotate your shadow maps with the camera orientation, and this way it's impossible to avoid jittering. I think you have to keep the shadow map orientation constant in world space and only adapt it to the position of the camera, ignoring the player's view direction. So when you rotate but don't move, you would see the corners of cascade transitions fixed to the floor. Does this make sense?
##### Share on other sites
Posted (edited)
I think you're probably correct. That's what I was referring to with using a sphere from the camera frustum to build the bounding box for my orthographic projection for the shadow light.
I think I'm missing something though. I've found some examples here and there, but I'm just not getting it. Maybe some pseudocode would help me.
Edited by skyemaidstone
##### Share on other sites
Semler's linked blog contains more articles and code, however...
Looking at code snippets often raises more questions than answers: what conventions are used here (is this a camera-to-world matrix or world-to-camera? what multiplication order do they use? etc.)? Mostly you have to figure out the details yourself, and after you've understood the concept it's only the details that are left.
I recommend you simplify the problem instead so you can solve it more easily on your own:
1. Ignore the frustum and any bounds and make each cascade just centered on the texel closest to the camera (see the sketch after this list). (Visualizing the texel grid on a simple ground plane should help to set this up. And if this works, all the math is known.)
2. If you get this to work robustly, so the cascades move with the camera without jitter, the main problem is solved, and you can start to optimize it for the frustum, which shouldn't be that hard at this point.
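A minimal sketch of step 1 in MonoGame/C# terms (our illustration, not JoeJ's code; the fixed light direction, cascade radius, and map size are assumed inputs): keep the light basis constant in world space and snap the cascade center, derived only from the camera position, to the shadow-map texel grid in light space.
// Sketch: a cascade centered on the camera with a world-space-constant
// orientation, its center snapped to whole shadow-map texels so it never swims.
// Assumes lightDir is not parallel to Vector3.Up.
Matrix BuildCascade(Vector3 cameraPos, Vector3 lightDir, float radius, float shadowMapSize)
{
    lightDir.Normalize();
    // Constant basis: never derived from the camera's view direction.
    Matrix lightBasis = Matrix.CreateLookAt(Vector3.Zero, lightDir, Vector3.Up);
    // World units covered by one shadow-map texel for this cascade.
    float texelSize = (radius * 2f) / shadowMapSize;
    // Snap the camera position to the texel grid in light space.
    Vector3 center = Vector3.Transform(cameraPos, lightBasis);
    center.X = (float)Math.Floor(center.X / texelSize) * texelSize;
    center.Y = (float)Math.Floor(center.Y / texelSize) * texelSize;
    center = Vector3.Transform(center, Matrix.Invert(lightBasis));
    // Fixed-size ortho box around the snapped center.
    Matrix view = Matrix.CreateLookAt(center - lightDir * radius * 2f, center, Vector3.Up);
    Matrix proj = Matrix.CreateOrthographicOffCenter(-radius, radius, -radius, radius,
                                                     0f, radius * 4f);
    return view * proj;
}
The key design point is that both the orientation and the ortho box size stay constant per cascade; only the snapped center moves, and it moves in whole-texel steps.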
##### Share on other sites
Thanks for the answers. That's exactly what I'm trying now, Joe, i.e., simplifying the problem.
If I can get rid of the jiggling for 1 CSM "slice" then I'm there really. Filtering and stuff works fine already.
I'll read some more and see if I can figure out how to make my slice ignore camera rotation. I've tried moving them in texel increments (based on one of the CSM links above, which I've read and reread), but the problem isn't really improved.
Increasing my shadow map size to something ridiculous didn't improve the jiggling either, so it does seem to be the rotation causing the problem. Hence I was trying to use the sphere-based approach for the bounding sphere, but I just don't "get it".
Or maybe I'm just doing the "move bounding box in texel increments" part incorrectly... I shall persevere.
##### Share on other sites
Ok, I came back to this after working on more interesting parts of my game, and although I've improved it by using a sphere-based approach to building the projections, it hasn't really improved the shimmering that much. In fact, if I comment out my texel snapping part, it makes little difference. I'm getting a bit stumped. Maybe I've been looking at it too long. It's acceptable now, I guess, but I'd like it to be totally rock solid ideally.
This creates the projection and view for each cascade. Any ideas what I've missed?
public void GenerateCSMOrthoSliceTS(float pNearClip, float pfarClip)
{
Vector3[] frustumCorners = new Vector3[8];
Matrix mCameraViewProj = _Camera.CameraView;
mCameraViewProj *= Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, _Camera._aspectRatio, pNearClip, pfarClip);
BoundingFrustum oCameraViewProjFrustum = new BoundingFrustum(mCameraViewProj);
frustumCorners = oCameraViewProjFrustum.GetCorners();
Vector3 frustumCenter = new Vector3(0, 0, 0);
for (int i = 0; i < 8; i++)
frustumCenter += frustumCorners[i];
frustumCenter /= 8;
lightsView = Matrix.Identity;
lightsViewProjectionMatrix = Matrix.Identity;
float radius = (frustumCorners[0] - frustumCorners[6]).Length() / 2.0f;
float texelsPerUnit = (float)4096 / (radius * 2.0f);
Matrix mTexelScaling = Matrix.CreateScale(texelsPerUnit);
SunlightDirection.Normalize();
Vector3 baselookAt = new Vector3(SunlightDirection.X, SunlightDirection.Y, SunlightDirection.Z);
Matrix mLookAt = Matrix.CreateLookAt(Vector3.Zero, baselookAt, Vector3.Up);
mLookAt = Matrix.Multiply(mTexelScaling, mLookAt);
Matrix mLookAtInv = Matrix.Invert(mLookAt);
frustumCenter = Vector3.Transform(frustumCenter, mLookAt);
frustumCenter.X = (float)Math.Floor(frustumCenter.X); //clamp to texel increment
frustumCenter.Y = (float)Math.Floor(frustumCenter.Y); //clamp to texel increment
frustumCenter = Vector3.Transform(frustumCenter, mLookAtInv);
Vector3 eye = frustumCenter + (SunlightDirection * radius * 2.0f);
// Note: as posted, the method ends here without ever writing lightsView or
// ShadowLightProjection (both stay Identity above), so the snapped center never
// reaches the shadow matrices. Presumably it continues along these lines (our
// reconstruction, not the original code):
lightsView = Matrix.CreateLookAt(eye, frustumCenter, Vector3.Up);
ShadowLightProjection = Matrix.CreateOrthographicOffCenter(-radius, radius, -radius, radius,
    0.0f, radius * 4.0f);
lightsViewProjectionMatrix = lightsView * ShadowLightProjection;
}
# Solved (Free): A clinical trial is run to compare the effectiveness of an experimental drug in reducing preterm delivery to a drug considered standard care and to placebo
#### ByDr. Raju Chaudhari
Mar 13, 2021
A clinical trial is run to compare the effectiveness of an experimental drug in reducing preterm delivery to a drug considered standard care and to placebo. Pregnant women are enrolled and randomly assigned to receive the experimental drug, the standard drug, or placebo. Women are followed through delivery and classified as delivering preterm (< 37 weeks) or not. The resulting data are shown below.
| Preterm Delivery | Experimental Drug | Standard Drug | Placebo |
|------------------|-------------------|---------------|---------|
| Yes              | 17                | 23            | 35      |
| No               | 83                | 77            | 65      |
Previous studies have shown that approximately 32% of women deliver prematurely without treatment. Is the proportion of women delivering prematurely significantly higher in the placebo group? Run the test at a 5 % level of significance.
### Solution
Given that $n = 100$, $X= 35$.
The sample proportion is
$$\hat{p}=\frac{X}{n}=\frac{35}{100}=0.35$$.
Hypothesis Testing Problem
The hypothesis testing problem is
$H_0 : p = 0.32$ against $H_1 : p > 0.32$ ($\text{right-tailed}$)
Test Statistic
The test statistic for testing above hypothesis testing problem is
\begin{aligned} Z & = \frac{\hat{p} - p}{\sqrt{\frac{p(1-p)}{n}}} \end{aligned}
which follows $N(0,1)$ distribution.
Significance Level
The significance level is $\alpha = 0.05$.
Critical values
As the alternative hypothesis is $\textit{right-tailed}$, the critical value of $Z$ is $1.64$.
The rejection region (i.e., critical region) for the hypothesis testing problem is $Z > 1.64$.
Computation
The test statistic is
\begin{aligned} Z & = \frac{\hat{p}-p}{\sqrt{\frac{p(1-p)}{n}}}\\ &= \frac{0.35-0.32}{\sqrt{\frac{0.32* (1-0.32)}{100}}}\\ & =0.643 \end{aligned}
Decision
The test statistic is $Z = 0.643$, which falls outside the critical region, so we fail to reject the null hypothesis.
$p$-value approach:
This is a right-tailed test, so the $p$-value is the area to the right of the test statistic ($Z=0.643$). Thus the $p$-value $= P(Z > 0.643) = 0.2601$.
The $p$-value is $0.2601$, which is greater than the significance level $\alpha = 0.05$, so we fail to reject the null hypothesis.
We conclude that the proportion of women delivering prematurely is not significantly higher in the placebo group at 0.05 level of significance. |
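As a quick numerical check, here is a small C# sketch that reproduces the test statistic and the one-sided $p$-value (the normal CDF is approximated with the Abramowitz–Stegun erf formula; the program and its names are ours, not part of the original solution):

```csharp
using System;

class ProportionTest
{
    // Upper-tail standard normal probability P(Z > z), via the
    // Abramowitz-Stegun 7.1.26 erf approximation (|error| < 1.5e-7).
    static double UpperTail(double z)
    {
        double x = z / Math.Sqrt(2.0);
        double t = 1.0 / (1.0 + 0.3275911 * Math.Abs(x));
        double erf = 1.0 - (((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
                      - 0.284496736) * t + 0.254829592) * t) * Math.Exp(-x * x);
        if (x < 0) erf = -erf;
        return 0.5 * (1.0 - erf); // P(Z > z) = 1 - Phi(z)
    }

    static void Main()
    {
        int n = 100;
        double p0 = 0.32;        // hypothesized proportion
        double pHat = 35.0 / n;  // observed proportion in the placebo group
        double z = (pHat - p0) / Math.Sqrt(p0 * (1 - p0) / n);
        Console.WriteLine($"Z = {z:F3}, p-value = {UpperTail(z):F4}");
        // Z = 0.643, p-value = 0.2601 -> fail to reject H0 at the 5% level.
    }
}
```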
# monotonic function can only have simple discontinuity
I am self-studying Rudin, Principles of Mathematical Analysis. I am having trouble going through the theorem saying that monotonic functions can only have simple discontinuities, that is: suppose $$f$$ is monotonic and discontinuous at $$x$$; then $$f(x^+)$$ and $$f(x^-)$$ must exist.
In the proof, it is argued that, for any $$\epsilon>0$$, by the definition of the least upper bound $$A:=\sup_{t\in(a,x)} f(t)$$, $$\exists \delta>0$$ s.t. $$a<x-\delta<x$$ and $$A-\epsilon\le f(x-\delta)\le A$$.
From my understanding, sup is the least upper bound. By the definition itself, it doesn't mean that the least upper bound would be approached with an $$\epsilon$$-ball.
The theorem is, of course, true. I am thinking of using facts like: $$A$$ is the sup $$\Rightarrow$$ $$A$$ is in the closure of $$\mathrm{range}(f(t): a<t<x)$$. Also, $$A$$ is not achieved: if $$A$$ were achieved at $$f(y)$$ with $$a<y<x$$, then $$f((y+x)/2)$$ would be larger than $$A$$ by monotonicity. These facts imply that $$A$$ is a limit point of $$\mathrm{range}(f(t): a<t<x)$$, from which the original proof follows.
I am wondering if my thought is necessary, or there's any quick fact to support the claim in the book.
It is sometimes a struggle for me to go through every detail of Rudin's book. It would also be much appreciated if someone could point me to reference textbooks that complement it. Thanks!
• $f$ is increasing and $\forall z<x,f(z)\le f(x)$ so $f(z)$ has a left limit. – Yves Daoust May 17 at 14:42
If $$A=\sup_{t\in(a,x)}f(t)$$, and if $$\varepsilon>0$$, then $$A-\varepsilon<A$$, and therefore there is some $$x_0\in(a,x)$$ such that $$f(x_0)>A-\varepsilon$$. So, let $$\delta=x-x_0$$. Then $$a<x-\delta<x$$. Besides, $$x-\delta=x_0$$ and therefore $$f(x-\delta)=f(x_0)>A-\varepsilon$$. And, of course, $$f(x_0)\leqslant A$$. So, yes, $$a<x-\delta<x$$ and $$A-\varepsilon<f(x-\delta)\leqslant A$$.
• Hi, thanks for the reply. I can only convince myself that there's such an $x_0$ if $A$ is a limit point of range$\{f(t): a<t<x\}$. Otherwise, $f$ might drop suddenly below $A$ so that no value $f(t)$ lies in the epsilon-ball of $A$. I am just wondering if there's any quick fact to circumvent my 4-line reasoning (or if my reasoning is true). – Mou May 16 at 18:56
• The supremum is a limit point; this is proven in Rudin (look for the proof that says that a closed and bounded set contains it supremum). – EpsilonDelta May 16 at 20:04
• Since $A$ is the least upper bound of the set $\{f(t)\,|\,t\in(a,x)\}$ and since $A-\varepsilon<A$, $A-\varepsilon$ is not an upper bound of that set. And this means there is an $x_0\in(a,x)$ such that $f(x_0)>A-\varepsilon$. Where is the flaw in this argument? – José Carlos Santos May 16 at 20:04
• Hi @JoséCarlosSantos, thanks! no flaws. I now understand. I didn't get the point that $A-\epsilon$ is not an upper bound, which means there exists a slightly larger value. Thanks again! – Mou May 18 at 4:56
• @EpsilonDelta this is probably not true? Let $E=[0,1]\cup\{2\}$. If I understand correctly, $\sup E = 2$, but $2$ is not a limit point. But of course, $[0,1] \cup \{2\}$ is bounded and closed, and it does contain its sup. – Mou May 18 at 5:01
You need to get clarity on a few things here. The terms "supremum" and "least upper bound" are synonymous and can be used interchangeably. Further the latter term is almost self explanatory in the sense that if $$M=\sup A$$ then $$M$$ is the least of all upper bounds of $$A$$ which further means that numbers less than $$M$$ are not upper bounds for $$A$$ and are therefore exceeded by some members from $$A$$.
Further, if $$f$$ is monotonically increasing then it means that $$x<y\implies f(x)\leq f(y)$$, and not that $$x<y\implies f(x)<f(y)$$. If you want to mean the latter then use the words "strictly increasing".
Consider the set $$S=\{f(t) \mid a<t<x\}$$, which is bounded above by $$f(x)$$, and thus $$A=\sup S$$ exists and $$A\leq f(x)$$ (remember $$A$$ is the least upper bound of $$S$$ whereas $$f(x)$$ is one of the upper bounds of $$S$$). Let $$\epsilon>0$$. Then $$A-\epsilon$$ is less than $$A$$ and hence is not an upper bound for $$S$$, and it is therefore exceeded by some member of $$S$$. Hence we have a $$t_0$$ with $$a<t_0<x$$ such that $$f(t_0)>A-\epsilon$$. Let $$\delta=x-t_0$$. Then $$\delta>0$$ and we have $$A-\epsilon<f(x-\delta)\leq A$$. By the monotone nature of $$f$$ we have $$A-\epsilon<f(t)\leq A$$ for all $$t$$ with $$x-\delta<t<x$$. This proves that $$f(x-) =\lim_{t\to x^-} f(t) =A$$.
By the way, why do you think the value $$A$$ can't be achieved? The number $$A=\sup S$$ can also be a member of $$S$$, and it may also be the case that $$A=f(x)$$.
Also note that the supremum is not necessarily a limit point of the set. If $$A=\sup S$$ and $$A\notin S$$ then $$A$$ is a limit point of $$S$$. If $$A\in S$$ then $$A$$ may or may not be a limit point of $$S$$.
Example 8-2-6 - Maple Help
Chapter 8: Applications of Triple Integration
Section 8.2: Average Value
Example 8.2.6
Obtain the average value of $f\left(\mathrm{ρ},\mathrm{φ},\mathrm{θ}\right)={\mathrm{ρ}}^{2}$ over $R$, the region that is bounded inside by the surface $\mathrm{ρ}=1+\mathrm{cos}\left(\mathrm{φ}\right)$ and outside by the sphere $\mathrm{ρ}=2$. (The variables $\left(\mathrm{ρ},\mathrm{φ},\mathrm{θ}\right)$ are spherical coordinates.) (See Example 8.1.14.)
#### Learning Goals
In this chapter, you will learn to:
• Use the suffixes -tion and -ize to understand meaning
• Draw inferences and conclusions
• Summarize information from a text
• Discuss and respond to the digital story 7th Word
• Identify and correct sentence fragments
• Use subject-verb agreement
• Write a descriptive paragraph that describes an important place in your life
• What is your favourite kind of story: funny, scary, dramatic, fantasy, or romantic?
• Try to think of a famous story (book, movie, TV show, video game) about:
• a relationship
• an accomplishment
• a special place
• a job or hobby
• a discovery
• How do you feel when your instructor asks you to write a story?
## Vocabulary
###### Scan What’s Your Story? to find a bold word for each of the following.
1. A group of people or things that are similar in some way
2. A person who is involved in an activity
3. Changed or affected by someone or something
4. Produced or caused something
5. Producing good effects on the body or mind
6. So confusing or difficult you feel like you can’t do it
7. Something that greatly affects people’s emotions
8. Succeeding at something by working hard
9. The act of saying who or what something is
10. Very useful, helpful, or important
Check your work with the Answer Key at the end of this chapter.
#### Word Pattern
-tion is a suffix that means “the act of.”
-ize is a suffix that means “make or become.”
###### Use the vocabulary words and affixes above to build a word for each definition below.
11. ___________________________: the act of participating
12. ___________________________: to make categories
13. ___________________________: the act of identifying
14. ___________________________: to make a drama
Check your work with the Answer Key at the end of this chapter.
Readers check their understanding of a text while they read. One strategy is to stop after each paragraph and try to retell the main idea in your own words.
1. A good way to check your understanding of a text is to summarize it. A summary is an explanation of the main ideas of a text. A summary does not include details or examples. It does not include your opinion of the text. Write a summary of the six types of stories described in this chapter.
2. An inference is an educated guess about what a text is saying, using your knowledge and logic. What’s Your Story suggests that students should avoid exploring personal stories with lots of difficult emotions when at school. Make an inference about why this might be a good idea.
3. As we read, we form our own thoughts about the text. We form judgments, or opinions. This is often called drawing conclusions. Draw a conclusion about which category 7th Word fits into best. Give reasons.
Check your work with the Answer Key at the end of this chapter.
Another way to check our understanding of a text is to apply what we have learned. Try this:
1. In the text, locate the five activities Joe Lambert recommends for finding a story to tell.
2. Choose one of these activities and try it.
3. Play with your story ideas until you find the seed of a story you want to tell.
4. Write down the basics of your story idea, so that you don’t forget them later.
## Sentence Fragments
#### Grammar Rule
A complete sentence needs three things: a subject, a verb, and a complete thought. If a sentence is missing one of these, it is called a fragment. Sentence fragments are a common mistake in people’s writing.
• This sentence is missing a subject: Is the second-largest country in the world.
Fixed: Canada is the second-largest country in the world.
• This sentence is missing a verb: James Naismith, the inventor of basketball.
Fixed: James Naismith was the inventor of basketball.
• This sentence is missing a complete thought: Unless it rains.
Fixed: We will have a picnic tomorrow unless it rains.
###### Are these complete sentences or fragments?
1. The photo on the wall.
2. The car had a flat tire.
3. In a hurry.
4. Sleeping until noon.
5. I think I lost a $50.00 bill.
6. On Tuesday morning.
7. A sale on tomatoes.
8. Sofia’s ice cream melted in the sun.
9. We are cheering for the Toronto Blue Jays.
###### What is missing from these sentences: a subject, verb, or complete thought?
10. If you have a sunburn.
11. Will send you an email.
12. Since it is a holiday.
13. The tiger at the zoo.
14. When we walk the dog.
Check your work with the Answer Key at the end of this chapter.
## Subject-Verb Agreement
#### Grammar Rule
You have learned that subject refers to the people, places, or things that do the action in a sentence. In this lesson, you’ll study how verbs must “agree” with their subjects.
• If the subject is singular, then the verb must be singular. Example: She listens to the radio every morning. She is the singular subject, and listens is the singular verb.
• If the subject is plural, then the verb must be plural. Example: They listen to the radio every morning. They is the plural subject, and listen is the plural verb.
Collective nouns are nouns that name a group of persons or things. A collective noun is often considered to be singular.
band, crowd, gang, staff, team, pair
Non-count nouns are things we cannot count. A non-count noun is considered to be singular.
news, water, advice, milk, bread, fruit, rain, coffee, tea, clothing, sugar, rice
###### Choose the verb that agrees with each subject.
1. This coffee looks/look very strong.
2. The band play/plays at the pub on Monday nights.
3. We plans/plan to get married this summer.
4. The staff at the hotel is/are always very friendly.
5. Juan and Ted has/have two cats.
6. The team is/are hoping to win the Stanley Cup.
7. My new pair of shoes have/has gone missing.
8. The water was/were so warm, I jumped right in.
9. The fruit is/are ripe and ready to eat.
10. The crowd go/goes wild when Omar goes on stage.
11. The caves is/are fun to explore.
Check your work with the Answer Key at the end of this chapter.
#### Grammar Rule
These pronouns go with singular verbs: anybody, anyone, everybody, everyone, nobody, no one, somebody, someone.
Example: Everybody is welcome at our party. Everybody is the singular subject and is is the singular verb.
Subjects with the following words usually go with singular verbs: any, each, either, every, neither, none, one.
Example: Each of the apples has a big brown spot. Each is the subject and has is the singular verb.
###### Choose the verb that agrees with each subject.
12. Neither of the fields is/are growing corn this year.
13. Each chapter was/were better than the one before it.
14. None of these socks have/has a match.
15. One of these tickets is/are the winner.
16. The buses get/gets very crowded at around 5:00 p.m.
17. No one is/are sitting next to me on the plane.
18. Somebody have/has left me a phone message.
19. Everybody was/were late for the party.
20. Stamps is/are expensive now.
21. A book of stamps is/are $10.00.
Check your work with the Answer Key at the end of this chapter.
Follow the steps below to write a descriptive paragraph on a place in your life.
1. Think: Brainstorm a big list of places that have played a role in your life. For example, the place could be a home, a town, a gathering place, a mountain, or a forest. Then, choose one place from your list and fill out the Mind Map. Ask your instructor for a copy or open and print one from the link. You will also find a printable version in Appendix 1.
2. Organize: Choose the best ideas from your Mind Map. Put a number next to each idea, to show what you will write about first, second, third, and so on.
3. Write: Follow your outline as you write a first draft of your descriptive paragraph. Don’t worry too much about spelling and grammar. Just get your ideas down in a way that makes sense. At this point, you may want to put your draft aside so you can look at it with fresh eyes later.
4. Edit: Use a different colour to make edits to your writing. Check to see how it sounds when you read it out loud. Is the meaning clear? Are there any details that are missing or off topic? Should you use different sentence types to make it flow more smoothly? Are there any words that you want to change to make your writing more alive? (Use a thesaurus to find more interesting vocabulary words.) Are all your sentences complete? Do you need to check the spelling of any words in a dictionary?
5. Rewrite: Write a final copy of your paragraph that includes all your edits. You may wish to type it on a computer. Finally, hand it in to your instructor.
## My 700 & awesome tips to make it happen. You're welcome

##### This topic has expert replies

Junior | Next Rank: 30 Posts
Posts: 14
Joined: 22 Oct 2011
Thanked: 10 times

### My 700 & awesome tips to make it happen. You're welcome

by swipesville » Sat Mar 03, 2012 11:34 pm

Be warned this is a very long post, and the last one I will make on this site. I detail an inspiring story of my journey with the GMAT as well as empty out my bag of the best tricks I can offer you, learned in my 4 months of studying. I officially started my GMAT preparation in November of '11.

Here is a little background about me: I'm a white male, 27 years old. I'm originally from Boston but have lived everywhere from Barcelona, to South Beach, to my current residence in Las Vegas, Nevada. I have a very friendly and outgoing personality and have held very unconventional jobs such as being a professional online poker player, a personal trainer/model, etc. Since I was not carrying a job at the moment I studied for the GMAT around the clock for a month. I viewed it kind of like a poker game, always searching for that advantage, that edge. Poker will be a theme throughout this post because I come from a high-stakes poker background and I brought a lot of those transferable skills to my GMAT preparation.

I had the Official Guide and the blue and green books (Math/Verbal Review). On my 1st attempt I strolled into my testing center, which was hilariously right on the Strip in Las Vegas, and crashed out with a mediocre 590. I was not happy. What does a smart person do after a setback? He/she continues to work hard but also examines their approach/strategy.

Strategy Tips: Sentence Correction

The key to the Verbal Section revolves around Sentence Correction. If you want to have a great Verbal score your sentence correction has to be just lights out. You need to lean off of this section to support you throughout the entire verbal section and try to eliminate the potential for strings of incorrect answers. This is the most important part of the GMAT to make flashcards on. It is the area most heavily represented on the Verbal Section and a spot where you really need to be looking to actively bank time. On another note, don't be one of those fools who tries to memorize every idiom; that is completely useless and a waste of time. The idioms aren't tested very often and unfortunately you have to kind of feel it out based on the meaning of the sentence. If you do miss an idiom, however, make sure to make a note of it on your own personal "IDIOMS" flashcard.

Here is the optimal way to approach sentence correction. As the Verbal tutorial ticks down (1 minute) as you prepare to begin this section, write ABCDE horizontally on the right side of your pad 4x on each of the 1st four pages you are going to use. Make them neat with slight spacing between the letters and leave space so the 4 aren't clustered on top of each other.
It is optimal to write these keys on the right side because your Critical Reasoning and Reading Comprehension notes will naturally flow from left to right as you write, so keeping these sentence correction keys on the right will help keep you more organized and you won't become flustered, as you might if you do each question on a random part of the page. The key to success is being calm and organized, and your scratch pad strategy should be no different. This move will save you time and keep you organized on the Verbal section, which equals points.

Once you see an SC, read A and do the standard stuff: look for errors. If you see one you snap cross it off and look for that error repeated in other answer choices. Your only job when you read choice A is to do one of 3 things. You either cross it off if you're sure it's wrong and can identify a grammatical rule violation that proves it, you put a - (a minus sign) if you don't really like choice A but you can't see anything to eliminate it, or you write a + on top of answer choice A if you like it and the sentence seems correct and clear. This move really takes the pressure off and allows you to find the worst choices first instead of having to make a strict yes/no judgment call on choice A.

Then a standard elimination strategy occurs. As you reach tougher SC questions this +/- system increases in value because it allows you to identify whether you kind of liked or didn't like an answer. Trust me, in a pressured time situation you will basically instantly forget the choice you just read, and having that helpful +/- reminder can keep you on track as you approach the final couple of choices.

Stop doing sentence correction on feel. How committed are you to the GMAT? To the Verbal Section? To Sentence Correction? You can scrape by with knowing some subject-verb agreement and avoiding the word "being," right? Wrong answer, buddy; you need to build this thing from the ground up. If you can't tell me what the difference is between a gerund and a participle right now, or when it is correct to use "would + verb root tense," you're just flat out not that serious about sentence correction. I'm giving a shout out right now to fellow Bostonians Dave and Jen, whose 4-part SC lesson on Knewton Prep is the best I've seen and really teaches you the underlying grammar structure of what you need to know. So stop just doing a bunch of SCs, because even when you do ok you only think you're "improving" as your time to completion decreases. I can assure you when test day comes, if you used a little too much "feel" on your practice tests when nothing was on the line, you will start breaking down on the real thing under the pressure. You will second guess yourself, you will spend time you can't afford to be spending, which will prevent you from crushing. Trust me, I know this from experience.

As a final note, I was rather furious when I took my test yesterday to find 4 SCs that incorporated the use of dashes in the middle of the sentence, sometimes intertwined with the underlined portion. I had no formal training in how exactly to operate with all these dashed portions and this hurt my score a bit. Kinda feel like writing the GMAC a letter cursing them out, hah.

Ok, back to my journey. So after this score I wrote an email describing my experience to my family and a few close friends. Some people supported me, especially two of my closest friends. However my dad wrote me a vicious e-mail screaming at me about getting my life on track.
I wonder if he would have written that had I come back with a great score? As we say in the poker community, it was a pretty "results oriented" e-mail. After that I envisioned the day I could write a victorious and sarcastic e-mail with a great score, thanking everyone who supported me.

So things were looking up. I studied hard night and day for the next month, and I purchased the Verbal Section of the GMAT Pill. The RC was great, the SC was good. The CR was some questions with random diagrams you'd never actually have time to draw on the actual test, and it just wasn't very good (get the POWERSCORE book for CR). When trying to improve at something it's never a bad idea to model/learn from the people who are the best. So I decided to book a coach. Her name is Vivian Kerr (Grockit); she's a Cali girl who's a flat out stud at standardized tests, so I decided to sign her to a long term deal.

Strategy Tips: Tutoring

Provided you can afford it, it's never a bad idea to have a superstar in your corner to spot things you can improve and keep you real sharp. While it's obviously important to attack your weak areas, it's also important to note how a tutor approaches questions stylistically. Sometimes a tutor can help you find the best way for you to approach a question type, but sometimes figuring out what doesn't work for you is just as helpful. I talked to my tutor about specific areas but also strategies of what to study, and how to study most effectively. It was definitely helpful and got me thinking about things the right way.

So momentum really seemed to be on my side as I approached the next exam - it was on January 3rd, so I even decided to do the unthinkable: take a pass on New Years in Las Vegas and study that night. O M G. My phone was ringing off the hook cuz I'm pretty much an awesome guy to hang out with, but I had to bail. If that's not commitment I really don't know what is.

So after my second test...I did worse. I got a 580 - devastation ensued. I was just so broken. I really felt like I had improved and had absolutely nothing to show for it. When the score came up I literally sat in my chair shocked, thinking maybe this was the score for the experimental section I took or something. The proctors literally had to pull me out of the testing room. I had to lie to my friends and family to keep from crying; it was just too embarrassing. A whole month of studying (that's literally all I did because I'm not carrying a job at the moment) and I had fallen down again. There were kids studying for the GMAT while holding down full time jobs. I couldn't break 600 with all the time in the world.

So again, I went back to the drawing board and decided I needed some kind of course/structure to fall back on; perhaps I had been trying to do too much on my own. I purchased the Knewton Course, which I thought was very good, and diligently watched the lessons, did homework, and took a lot of notes on flashcards. At this point I was into my job search and had some trading companies interested in me because of my poker background. My friend suggested I take one last crack at the GMAT as a free-roll: in case I did well, I could throw it on my resume etc. I had been studying about 3-4 hours a day for the past month, which was down from my usual 12. So I said fk it, I'm going to pick myself back off the ground for one last shot. For 12 days I reviewed and noted and practiced, and on March 2 I took another crack at it.

Strategy Tip: Use Adderall occasionally to help you study.
I'm certainly not a drug user or anything like that; I drink occasionally and have never even smoked weed, just so you don't get the wrong idea. However Adderall is a sweet study aid, and if I'm being honest in this review I have to admit that it helped me stay focused during some punishing study sessions.

Test Day Recap: I woke up at 8am and had a real chill morning. I flipped thru my flashcards, I ate some Cheerios, and I watched a tennis match on TV. My roommate woke up at 12 like the tool that he is and we went to eat breakfast together at a diner. After this late breakfast I texted a few of my closest friends who had been supporting me through these hard times. Here's a txt convo between me and one of my closest buddies.

Brett: Wake up George, it's judgment day
George: the bell tolls for thee
Brett: Haha, light a candle for me, I'll call you at 8pst after it ends no matter what the score, love you baby, my time to shine
George: Fly like an angel Brett, it's too beautiful

Haha, yeah, we're pretty weird. After that I went to my desk and did 3 problems of each type just to warm up a bit. And since I'm being completely honest in this review, at the risk of sounding like a lunatic... for 10 minutes I paced around our house yelling things like "Lets fkn ball, I'm bout to wreck shop, lets fkn go" or some combination of rallying phrases to psych myself up. It was like the 8 Mile soundtrack: "if you had...one shot". This was my last shot.

I left for the test center at 2:45pm for my 4pm start, giving myself plenty of time to hang out. Usually it takes 15 minutes to get to this testing center. There was a huge accident on the highway and I was forced to exit and enter the Strip on the opposite side of where I needed to be. I fought thru immense traffic and arrived at the testing center at 4:10pm. The guy informed me there was a 15 minute grace period; if I was 5 minutes later the appointment would have been cancelled, whew. Then there was some problem where these clowns on MBA.com copied down my birthday as 5-5 instead of 5-10, so they had to call in and get the green light.

I neglected to mention earlier that the one positive from my first GMAT attempt was I got 6.0 on both essays, which isn't the easiest thing to do. I had decided beforehand to spend 15 minutes on each essay, to get a bit of a warm-up, but I certainly wasn't going to expend any energy actually caring about what I wrote. The 1st question was "what has more influence in a nation or a community, a powerful business leader or a government official". My first paragraph was entirely devoted to ripping on the question, claiming that this was the worst GMAT essay question I'd ever seen. "To combine two vastly different entities such as an nation or community into a entity is downright ludicrous and makes this essay question the worst one I've ever laid eyes on." I was laughing at my own essay, staying really loose and calm. The second essay involved me berating a business owner and more hilarity ensued. Whoever reads my essays is definitely going to be cracking up. I wrote personal messages at the end too, like "hey have a good weekend, god bless you", just random stuff like that.

It was all business on the math section. I started off ripping; it was a battle between me and the GMAT. I'm pretty sure I snapped off 9 of the first 10, and after that I could feel the test start to get angrier and angrier that I was doing well. A huge bright spot for me was an area I worked very hard on, and that was data sufficiency.
One of the pointers I had gotten from my tutor was to spend a bit more time trying to break down the prompt and not sprint directly to the statements; this definitely pays a lot of dividends.

Strategy Tips: Data Sufficiency

Here is my best data sufficiency tip. You won't find it in an online course, or on a forum, or from your friend who took the GMAT last year, but I'm a big believer in it: start with the easier/shorter statement! I do this every time. This just makes your life so much easier - you get insight into the problem dealing with a much more basic statement. You build momentum and confidence if you can correctly analyze it too. Sometimes if it's sufficient you know the true answer to a value question, and although you obviously don't want to carry over information, knowing the real answer will help you look at the harder statement in a more intelligent fashion.

Furthermore, as an aside for you advanced Data Sufficiency doers: if it's a YES/NO question and you start with the easier statement (say A) and the answer is always YES and it's sufficient, most people then look for a YES and a NO in statement B. However, the high-stakes pro play is: if, say, you're plugging in numbers and stumble across a NO in statement B, don't look for a YES. You're already done; the answer is A. If you know for sure that statement A was always YES, then once you've found a NO in statement B you are guaranteed to find a YES if you keep testing things! The answer to a DS can't be always YES for one statement and always NO for the other (because the statements are true, remember!?). I hope that made sense; please re-read it if you don't understand. It's advanced DS, a little trick I found out on my own.

Timing Strategies: I used 3 check-ins for my quantitative section. I checked in at problems 11, 21, and 31, with my time left of 54 minutes, 34 minutes, and 14 minutes. Don't write them at the top of your page like you might hear, because your scratch pad will always be changing pages, obviously. It's important to know where you're at and what to do if you're behind. I've often read recaps on this site where people say things like "I was horrified to learn that I had 12 minutes left and 10 questions!" Listen guys, that just can't happen to you; your score is just going to nose-dive at the end with a long series of incorrect answers. You need to know where you're at, because if you're using the neat move where you take a long time early on... sure, you're in the thick of it with a good score and tough questions, but these tough questions will weigh you down, and the joke is on you, because 30 minutes later at the end of the test, you're going to get hurt.

This leads me to a move my friends and I like to call "The 5-second chalk" (chalk meaning like chalking it up [throwing in the towel]). If you get a really hard problem, especially in an area where you just aren't that fast or proficient, I recommend the 5-second chalk. You will pick it up 20% of the time by pure luck, and 5-6% of the time it's an experimental question, and even if you were to give it a real shot you may only improve your accuracy to getting it right between 30%-40% of the time. So basically you're only sacrificing 5%-15%. The downside you avoid is wasting 3.5 minutes on a problem you never really had much of a chance on anyway. Take the 2 minutes; for you gamers out there, think of it as a power-up (BAM: +2 minutes!).
Don't sit there pretending like you're going to try to figure out a way to solve some super difficult problem. Just admit you're beat, fold your hand, and exert your energies on problems you have a reasonable shot with. "The second you know your cards can't win throw them in" - Rounders. I had one problem in Geometry that was just so difficult it covered almost the entire screen - 5-second chalk.

You also get a double power-up if you can use the 5-second chalk on a super hard problem in an area you aren't that good at. For example, say you're god awful at Venn Diagrams and question 24 is a Venn Diagram and looks super hard. You are just shipping EV (yes, another poker expression [expected value]) if you chalk this up. The GMAT has content restrictions, meaning you won't ever see like 10 probability questions on a test, so if you see a question in your weak area that's difficult, chances are you won't be seeing this type again if it's infrequently tested (ex: Venn Diagrams). The 5-second chalk allows you to just own the GMAT super hard.

As a final pointer, when using the 5-second chalk quickly look for answer choices in pairs and don't choose the odd one out. Furthermore, the GMAT knows some people try to work backwards on tougher ones by plugging in answer choices, so they are less likely to make A/B correct. Also they occasionally like to put a real sucker answer choice as choice A, hoping you forget the last part of the calculation. This knowledge leads us to favoring the bottom half of the answer choice column. In poker we have a term called "the run good," which is basically the amount of luck an individual has in their life and on a standardized test. Help the run good find you and shoot up a flare: guess intelligently and manage your time wisely and it just might show up to help out, like it did for me.

Guessing Strategies: So back to the test. I was rolling thru with a gleam in my eye as the test continued. I ran into a long word-problem-ish DS with a few complicated formulas, which is definitely a weak spot of mine. I practice what I preach, guys: I used the 5-second chalk. Obviously on DS you want a strong bias against answer choice E, especially on harder ones. This is because most people think logically, and when they're giving up and feel like they're outclassed they choose E, which is akin to the fold button. We need to be thinking one step ahead of the GMAT. They're banking on us doing what the majority of test takers would do. If they're expecting us to fold we need to call. I would have a slight bias against C as well and usually pick A or B.

Ok, this is advanced, but listen carefully. A GMAT problem is supposed to take you 2 minutes. Therefore if it's a long prompt and two pretty tough statements, as in the aforementioned problem, probably one of the statements is going to be a bit easier to get through and the other is going to be tougher - because after all, if both statements were super hard... is it really solvable in 2 minutes? As a final bias I would choose whichever one looks harder, because oftentimes if you were to actually do the problem, the easier statement will pan out as insufficient and the hard one will be sufficient in a way that's hard for you to see or figure out. This is obviously not always going to be true, but it is the percentage play.

Ok, back to the test. I was just in a zone and was basically right on pace, effectively guessing and hanging as tough as I could. I finished just on time. As a final pointer on the math section, to those of you trying to increase your score:
Remember, I moved up from a 580 to a 700, so it was basically a completely different test. When I got the 580 I had racked up extra time and easily was able to finish, so I spent it in the middle of the test. If you're looking for a Q49/Q50 you have to keep the big picture in mind. Assume your talent, and that the questions are going to get tough, and really try to hold on to the extra time you bank early on in the 1st 10 questions. Once the test realizes you're a stud they're going to try to take the hammer to you near the end, and this is where you really need to have this time to use. So don't needlessly waste it like I did in the early/middle of the test double-checking a problem you're 90% sure of, because that will hurt you later on down the road. Those 4-5 minutes you saved up seem like a good chunk of change, but you can go broke quickly. This happened to me: the last 3 problems were really tough, and to be honest, on question #35 I started breaking down under the time pressure and couldn't think straight on a problem I felt I was supposed to get. I was really wishing I had been a bit more urgent earlier on, as I was forced to basically guess on 36 and run a 1/3 on #37. I probably had a Q50 going into the last 3 problems and settled for a pretty sweet Q49.

After a break and re-setting the brain to verbal mode, I began. Started off well. Verbal is my stronger section and I've had multiple practice tests in the 80-90% range. Same check-ins: #11, #21, #31, and #37 too, at 57 minutes, 39 minutes, 20 minutes. As you hit question #37 you should have 9 minutes remaining and 5 questions (think 9-5, 9 to 5). As I mentioned before, SC is my lockdown section, followed by RC, and I pay the most careful attention and use the most time on CR to try to hang tough. I used to struggle on RC inference questions because I always felt very uncomfortable about what exactly I was supposed to be looking for. Finally, after about 3 months, I realized it's the exact same thing as a CR inference question. It's just something that must be true. Say it with me: "MUST BE TRUE". The same timing issue happened a bit at the end of the verbal section; I had to chalk up a tough bold-faced CR question on 40, which sucks because those are worth a lot of positive points. I had been a little too casual earlier on.

As the computer calculated the score I said a prayer like everyone does. I was praying for something in the 600s; I just didn't want to be embarrassed again. Here I was with no job, spending all my time studying, and I hadn't broken 600. When the screen showed 700 on the dot I knew I was officially in the 700 club. It was 9pm on Friday night, I was the last kid in there, and I definitely felt some tears welling up thinking about how much I had sacrificed, how hard I had worked, and my friends who refused to stop believing in me even when I had almost lost faith in myself.

As a side note, I've read recaps where people talk about actively shrieking with joy or celebrating in the test room. If you do that you're basically a huge tool and incredibly inconsiderate. I would never celebrate in front of people, many of whom probably did not do as well. I've been on both sides of the coin (the tears of pain and tears of joy). If you want to go to business school, show some class.

I'm sorry this post is so absurdly long, but I wanted to transfer as much information as possible to future GMAT test takers on this site.
This is going to be my last thread on this site, but I will certainly respond to any comments if people have any questions, because I'm happy to help out. There is a path for everyone to a 700, guys. I wanted to share my story to show you that no matter how broken down you are, it is possible to make this thing happen. I hope you have taken something away from my story or from my best tips and tricks that I have laid out here with a poker twist on them. Lastly I wanted to thank my friends, my tutor, some members of my family - basically all the people that kept helping me and believing I could shine. God bless.

~Brett

Master | Next Rank: 500 Posts
Posts: 136
Joined: 08 Apr 2009
Thanked: 4 times
Followed by:1 members

by Troika » Sun Mar 04, 2012 5:06 am

Brett, that is an awesome debrief; a truly inspirational story of self-belief and discipline. Congrats on the 700. I wish you all the best in your applications.

Newbie | Next Rank: 10 Posts
Posts: 4
Joined: 29 Jan 2012
Location: Delhi, India

by valiullah » Sun Mar 04, 2012 6:15 am

Congrats Brett on a job well done!

Newbie | Next Rank: 10 Posts
Posts: 3
Joined: 05 Feb 2012

by NafisaH » Sun Mar 04, 2012 11:29 am

Not to be completely clueless but... when is it correct to use "would + verb root tense"? Is it for the subjunctive mood?

Newbie | Next Rank: 10 Posts
Posts: 3
Joined: 19 Feb 2012

by zamian256 » Sun Mar 04, 2012 12:17 pm

Congrats and thank you for sharing; it was very helpful.

Junior | Next Rank: 30 Posts
Posts: 14
Joined: 22 Oct 2011
Thanked: 10 times

by swipesville » Sun Mar 04, 2012 10:56 pm

Thanks guys. Wait -- applications..? I thought the whole reason to rip 6.0's on the essays was so you could sub-contract out your applications to companies that specialize in writing them, and the schools can't question them because you lean off your 6.0s?? Hahah, jk.

Yeah, the "would + verb root" is basically for a past tense condition with a future tense result: "If I studied harder, I would do better."

I poured my heart and soul into this post, and it was brutally honest too; I didn't leave anything out. This is the director's cut version and it took me two days to write. Hopefully a lot of people read it, because I feel like people could learn a lot from some of my tips - they are very specific and non-generic. It will help them avoid some of the mistakes I made along the way to the 700.

~Brett

MBA Admissions Consultant
Posts: 2278
Joined: 11 Nov 2011
Location: New York
Thanked: 660 times
Followed by:266 members
GMAT Score:770

by Jim@StratusPrep » Mon Mar 05, 2012 11:22 am

Great debrief. This will be very helpful to all.
Newbie | Next Rank: 10 Posts
Posts: 7
Joined: 15 Dec 2011
Thanked: 2 times
by destroyerofgmat » Tue Mar 06, 2012 1:02 pm
Also:
I'm bout to wreck shop.
Noted.
Master | Next Rank: 500 Posts
Posts: 183
Joined: 21 Sep 2011
Location: Washington, DC
Thanked: 6 times
Followed by:2 members
GMAT Score:500
by Rastis » Tue Mar 06, 2012 1:23 pm
Any special study habits you adopted to do well in the quant section? That has always been my weak side, and like you, I have not been able to break 600, or even 550 for that matter.
Jesse
Senior | Next Rank: 100 Posts
Posts: 79
Joined: 13 Feb 2012
Thanked: 2 times
Followed by:3 members
by jzw » Tue Mar 06, 2012 4:59 pm
Wow. I read it and can absolutely appreciate what you went through. Enjoy the victory, you deserve it!
Newbie | Next Rank: 10 Posts
Posts: 1
Joined: 09 Feb 2011
Location: Mumbai
by llks » Tue Mar 06, 2012 11:36 pm
Congrats Brett,
~LLKS
"Living life king size"
Newbie | Next Rank: 10 Posts
Posts: 3
Joined: 26 May 2011
by gatorman1122 » Wed Mar 07, 2012 7:54 am
Junior | Next Rank: 30 Posts
Posts: 14
Joined: 22 Oct 2011
Thanked: 10 times
by swipesville » Thu Mar 08, 2012 10:24 am
Sooo +EV there haha. Ok Rastis, the first thing you need revolves around belief and confidence. One thing I did not mention in my long post was that I am far, far from a stellar student. I remember failing multiple math courses in college, failing out of school more than once, and struggling to graduate. If I can get a Q49 in math, anyone can. So if that's true for me, any self-doubt or lack of belief, or thoughts that it's too tough? You can check all that at the door right now, because if I can do it anyone can. The math section in reality only covers a finite number of concepts; it's like learning a second language - no one said it's easy, but it certainly is possible. Think of the math section in 3 parts: Problem Solving, Data Sufficiency, and Geometry. Start to make a collection of formulas/concepts, and I'd recommend using flashcards.
Next you need to combine doing problems (and obviously reviewing your mistakes) with some flat out learning from the ground up. I used the Knewton Course which was very good to help me solidify some concepts. They have a great curriculum that will build your knowledge. Just doing a lot of problems isn't enough you need to have some really solid fundamentals if you want to start to get into the Q40-Q45 range in Quant. So make sure you have a way to increase your knowledge base, you can't do everything on your own - in this digital age be learning stuff from ppl teaching it to you online.
The same way I said you lean off Sentence Correction for Verbal, you lean off number properties for Math; learn as much as you can about this topic. Your data sufficiency has to be very strong as well. My tip about starting with the easier statement is very helpful. Geometry is very formula based but actually kind of fun; this will require a lot of flashcards. Make this a strong point as well.
Finally, I always think of it like playing poker with the GMAT. You have to be incredibly sharp and aware of what's going on. I would say 1 out of every 3 questions has some trick hidden in the problem somewhere, so be very careful and respect every problem. At the beginning don't work timed; for you it seems like the first major step is going to be getting a better grasp of the initial mathematical concepts. Next, as that starts to fall into place, you're going to need to use some of the timing/guessing strategies I mentioned in my original post. Most test-takers sacrifice 20-50 points because they don't come from a risk/reward background and don't understand how the test really works. That's a general overview of how to get started, but let me know if you have any more specific questions. Hope that helps, gl man
~Brett
Master | Next Rank: 500 Posts
Posts: 183
Joined: 21 Sep 2011
Location: Washington, DC
Thanked: 6 times
Followed by:2 members
GMAT Score:500
by Rastis » Sun Mar 11, 2012 12:20 pm
Just took a practice test and scored my worst since I first took a practice test last July. With my test slated for next month, needless to say I have no more confidence.
Newbie | Next Rank: 10 Posts
Posts: 2
Joined: 10 Apr 2012
Thanked: 1 times
by jadakiss » Tue Apr 10, 2012 7:10 am
This is the best review that I've read.
Simple unsolved math problem, 5
It seems everyone’s heard of Pascal’s triangle. However, if you haven’t then it is an infinite triangle of integers with 1‘s down each side and the inside numbers determined by adding the two numbers above it:
First 6 rows of Pascal’s triangle
The first 6 rows are depicted above. It turns out, these entries are the binomial coefficients that appear when you expand $(x+y)^n$ and group the terms into like powers $x^{n-k}y^k$:
First 6 rows of Pascal’s triangle, as binomial coefficients.
The history of Pascal’s triangle pre-dates Pascal, a French mathematician from the 1600s, and was known to scholars in ancient Persia, China, and India.
Starting in the mid-to-late 1970s, British mathematician David Singmaster was known for his research on the mathematics of the Rubik’s cube. However, in the early 1970s, Singmaster made the following conjecture [1].
Conjecture: If $N(a)$ denotes the number of times the number $a > 1$ appears in Pascal’s triangle then $N(a) \leq 12$ for all $a>1$.
In fact, there are no known numbers $a>1$ with $N(a)>8$, and the only number greater than one with $N(a)=8$ is $a=3003$.
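For readers who want to poke at the conjecture, a brute-force check is easy to write. The sketch below is mine (Python; math.comb requires Python 3.8+, and the function name N is just chosen to match the notation above). It relies on the fact that any appearance $C(n,k)=a$ forces $n \leq a$, since $C(n,1)=n$, and that the entries of a row increase up to the middle, so the inner loop can stop early:

from math import comb

def N(a):
    """Count how often a > 1 appears in Pascal's triangle."""
    count = 0
    for n in range(2, a + 1):
        for k in range(1, n // 2 + 1):
            c = comb(n, k)
            if c == a:
                # C(n, k) = C(n, n-k): two appearances unless k is the exact middle
                count += 1 if 2 * k == n else 2
            if c >= a:
                break  # entries increase up to the middle of the row
    return count

print(N(3003))  # prints 8, the record value mentioned above
print(N(6))     # prints 3: C(6,1), C(6,5), and C(4,2)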
References:
[1] Singmaster, D. “Research Problems: How often does an integer occur as a binomial coefficient?”, American Mathematical Monthly, 78 (1971), 385–386.
# Fraction of native contacts over a trajectory¶
Here, we calculate the native contacts of a trajectory as a fraction of the native contacts in a given reference.
Last executed: Sep 25, 2020 with MDAnalysis 1.0.0
Last updated: June 29, 2020 with MDAnalysis 1.0.0
Minimum version of MDAnalysis: 1.0.0
Packages required: MDAnalysis, MDAnalysisTests, nglview, numpy, pandas, matplotlib (see the import cell below).
[1]:
import MDAnalysis as mda
from MDAnalysis.tests.datafiles import PSF, DCD
from MDAnalysis.analysis import contacts
import nglview as nv
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
The test files we will be working with here feature adenylate kinase (AdK), a phosphotransferase enzyme. ([BDPW09]) The trajectory DCD samples a transition from a closed to an open conformation.
[2]:
u = mda.Universe(PSF, DCD)
## Background¶
Residues can be determined to be in contact if atoms from the two residues are within a certain distance. Native contacts are those contacts that exist within a native state, as opposed to non-native contacts, which are formed along the path to a folded state or during the transition between two conformational states. MDAnalysis defines native contacts as those present in the reference structure (refgroup) given to the analysis.
Proteins often have more than one native state. Calculating the fraction of native contacts within a protein over a simulation can give insight into transitions between states, or into folding and unfolding processes. MDAnalysis supports three metrics for determining contacts, each demonstrated below: a hard cutoff at the reference distance ('hard_cut'), a fixed radius cutoff ('radius_cut'), and a soft switching potential ('soft_cut').
## Defining the groups for contact analysis¶
For the purposes of this tutorial, we define pseudo-salt bridges as contacts. A more appropriate quantity for studying the transition between two protein conformations may be the contacts formed by alpha-carbon atoms, as this will give us insight into the movements of the protein in terms of the secondary and tertiary structure. The Q1 vs Q2 contact analysis demonstrates an example using the alpha-carbon atoms.
[3]:
sel_basic = "(resname ARG LYS) and (name NH* NZ)"
sel_acidic = "(resname ASP GLU) and (name OE* OD*)"
acidic = u.select_atoms(sel_acidic)
basic = u.select_atoms(sel_basic)
## Hard cutoff with a single reference¶
The 'hard_cut' or hard_cut_q() method uses a hard cutoff for determining native contacts. Two residues are in contact if the distance between them is lower than or equal to the distance in the reference structure.
Below, we use the atomgroups in the universe at the current frame as a reference.
[4]:
ca1 = contacts.Contacts(u,
select=(sel_acidic, sel_basic),
refgroup=(acidic, basic),
method='hard_cut').run()
The results are available as a numpy array at ca1.timeseries. The first column is the frame, and the second is the fraction of contacts present in that frame.
[5]:
ca1_df = pd.DataFrame(ca1.timeseries,
columns=['Frame',
'Contacts from first frame'])
[5]:
Frame Contacts from first frame
0 0.0 1.000000
1 1.0 0.492754
2 2.0 0.449275
3 3.0 0.507246
4 4.0 0.463768
Note that the data is presented as fractions of the native contacts present in the reference configuration. In order to find the number of contacts present, multiply the data by the number of contacts in the reference configuration. Initial contact matrices are saved as pairwise arrays in ca1.initial_contacts.
[6]:
ca1.initial_contacts[0].shape
[6]:
(70, 44)
You can sum this to work out the number of contacts in your reference, and apply that to the fractions in your timeseries data.
[7]:
n_ref = ca1.initial_contacts[0].sum()
print('There are {} contacts in the reference.'.format(n_ref))
There are 69 contacts in the reference.
[8]:
n_contacts = ca1.timeseries[:, 1] * n_ref
print(n_contacts[:5])
[69. 34. 31. 35. 32.]
### Plotting¶
You can plot directly from the dataframe, or use other tools such as seaborn. In this trajectory, the fraction of native contacts drops immediately to under 50%, and fluctuates around 40% for the rest of the simulation. This means that the protein retains a structure where around 40% of the salt bridges in the reference remain within the distance of the reference. However, it is difficult to infer information on domain rearrangements and other large-scale movement, other than that the protein never returns to a state similar to the initial frame.
[9]:
ca1_df.plot(x='Frame')
plt.ylabel('Fraction of contacts')
[9]:
Text(0, 0.5, 'Fraction of contacts')
## Radius cutoff with a single reference¶
Another metric that MDAnalysis supports is determining residues to be in contact if they are within a certain radius. This is similar to the hard cutoff metric, in that there is no potential. The difference is that a single radius is used as the cutoff for all contacts, rather than the distance between the residues in the reference. For a tutorial on similar contact analysis of residues within a cutoff, see Number of contacts within cutoff. That tutorial calculates the overall number or fraction of contacts, instead of the fraction of native contacts.
You can choose this method by passing in the method name 'radius_cut', which uses radius_cut_q(). The radius keyword specifies the distance used in ångström. No other arguments need to be passed into kwargs.
[10]:
ca2 = contacts.Contacts(u, select=(sel_acidic, sel_basic),
                        refgroup=(acidic, basic),
                        method='radius_cut', radius=4.5).run()
### Plotting¶
Again, we can plot over time. We can see that the fraction of native contacts from the first frame has a very different shape for the radius_cut method vs the hard_cut method. While the hard_cut metric tells us that >50% of the native contacts never have equal or lower distance during the trajectory, as compared to the reference, the radius_cut analysis shows us that the fraction of contacts within 4.5 Å decreases gradually to 75% over the trajectory. We can infer that almost half the native contacts in the reference frame were closer than 4.5 Å. Moreover, the continuous decrease suggests that the protein may be unfolding, or that large-scale changes in conformation are occurring in such a way that the native salt bridges are not preserved or re-formed.
[11]:
ca2_df = pd.DataFrame(ca2.timeseries,
columns=['Frame', 'Contacts from first frame'])
ca2_df.plot(x='Frame')
plt.ylabel('Fraction of contacts')
[11]:
Text(0, 0.5, 'Fraction of contacts')
## Soft cutoff and multiple references¶
### Multiple references¶
refgroup can either be two contacting groups in a reference configuration, or a list of tuples of two contacting groups.
Below we want to look at native contacts from the first frame, and the last frame. To do this, we create a new universe called ref with the same files (and therefore same data) as u. We need to do this so that the (acidic, basic) selections from u, which are assigned from the first frame, remain unchanged. ref is a different Universe so when we set it to its last frame (with index -1), it does not affect u or the previous selections. Now, when we re-select the atomgroups from ref with the selection string used in the hard-cutoff section, different contacts are selected than those found in the first frame of u.
[12]:
ref = mda.Universe(PSF, DCD)
ref.trajectory[-1]
acidic_2 = ref.select_atoms(sel_acidic)
basic_2 = ref.select_atoms(sel_basic)
### Soft cutoff¶
This time we will use the soft_cut_q algorithm to calculate contacts by setting method='soft_cut'. This method uses the soft potential below to determine if atoms are in contact:
$Q(r, r_0) = \frac{1}{1 + e^{\beta (r - \lambda r_0)}}$
$$r$$ is a distance array and $$r_0$$ are the distances in the reference group. $$\beta$$ controls the softness of the switching function and $$\lambda$$ is the tolerance of the reference distance.
Suggested values for $$\lambda$$ are 1.8 for all-atom simulations and 1.5 for coarse-grained simulations. The default value of $$\beta$$ is 5.0. To change these, pass kwargs to contacts.Contacts. We also pass in the contacts from the first frame ((acidic, basic)) and the last frame ((acidic_2, basic_2)) as two separate reference groups. This allows us to calculate the fraction of native contacts in the first frame and the fraction of native contacts in the last frame simultaneously.
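As a sanity check of the switching function, here is a minimal NumPy sketch of the formula above (this is my own illustration, not the library's implementation; the helper name soft_cut_fraction is hypothetical, with keyword names chosen to mirror the kwargs passed below):

import numpy as np

def soft_cut_fraction(r, r0, beta=5.0, lambda_constant=1.5):
    # Q(r, r0) = 1 / (1 + exp(beta * (r - lambda * r0))), averaged over contacts
    r, r0 = np.asarray(r), np.asarray(r0)
    q = 1.0 / (1.0 + np.exp(beta * (r - lambda_constant * r0)))
    return q.mean()

# pairs at their reference distance score close to 1 (fully "native")
print(soft_cut_fraction([3.0, 4.0], [3.0, 4.0]))    # ~1
# pairs much further than lambda * r0 score close to 0
print(soft_cut_fraction([10.0, 12.0], [3.0, 4.0]))  # ~0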
[13]:
ca3 = contacts.Contacts(u, select=(sel_acidic, sel_basic),
refgroup=[(acidic, basic), (acidic_2, basic_2)],
method='soft_cut',
kwargs={'beta': 5.0,
'lambda_constant': 1.5}).run()
Again, the first column of the data array in ca3.timeseries is the frame. The next columns of the array are fractions of native contacts with reference to the refgroups passed, in order.
[14]:
ca3_df = pd.DataFrame(ca3.timeseries,
columns=['Frame',
'Contacts from first frame',
'Contacts from last frame'])
[14]:
Frame Contacts from first frame Contacts from last frame
0 0.0 0.999094 0.719242
1 1.0 0.984928 0.767501
2 2.0 0.984544 0.788027
3 3.0 0.970184 0.829219
4 4.0 0.980425 0.833500
### Plotting¶
Again, we can see that the fraction of native contacts from the first frame has a very different shape for the soft_cut method vs the other methods. Like the radius_cut method, a gradual decrease in salt bridges is visible; unlike that plot, however, more than 80% of native contacts are still counted by 100 frames using this metric. By itself, this analysis might suggest that the protein is unfolding.
More interesting is the fraction of native contacts from the last frame, which rises from ~70% to 100% over the simulation. This rise indicates that the protein is not unfolding, per se (where contacts from the last frame would be expected to rise much less); but instead, a rearrangement of the domains is occurring, where new contacts are formed in the final state.
[15]:
ca3_df.plot(x='Frame')
[15]:
<matplotlib.axes._subplots.AxesSubplot at 0x7ff2543ce280>
Indeed, viewing the trajectory shows us that the enzyme transitions from a closed to open state.
[16]:
u.trajectory[0] # set trajectory to first frame (closed)
# make a new Universe with coordinates of first frame
[17]:
u.trajectory[-1] # set trajectory to last frame (open)
# make a new Universe with coordinates of last frame
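The viewing cells are truncated in this copy. A minimal reconstruction with nglview might look like the sketch below (the variable names are mine; mda.Merge snapshots the current coordinates of an AtomGroup into a new Universe, as the comments above describe):

u.trajectory[0]                      # closed state
closed = mda.Merge(u.atoms)          # new Universe with first-frame coordinates
nv.show_mdanalysis(closed)

u.trajectory[-1]                     # open state
opened = mda.Merge(u.atoms)          # new Universe with last-frame coordinates
nv.show_mdanalysis(opened)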
We can also plot the fraction of salt bridges from the first frame, over the fraction from the last frame, as a way to characterise the transition of the protein from closed to open.
[18]:
ca3_df.plot(x='Contacts from first frame',
y='Contacts from last frame',
legend=False)
plt.ylabel('Contacts from last frame');
## References¶
[1] Oliver Beckstein, Elizabeth J. Denning, Juan R. Perilla, and Thomas B. Woolf. Zipping and Unzipping of Adenylate Kinase: Atomistic Insights into the Ensemble of Open↔Closed Transitions. Journal of Molecular Biology, 394(1):160–176, November 2009. 00107. URL: https://linkinghub.elsevier.com/retrieve/pii/S0022283609011164, doi:10.1016/j.jmb.2009.09.009.
[2] R. B. Best, G. Hummer, and W. A. Eaton. Native contacts determine protein folding mechanisms in atomistic simulations. Proceedings of the National Academy of Sciences, 110(44):17874–17879, October 2013. 00259. URL: http://www.pnas.org/cgi/doi/10.1073/pnas.1311599110, doi:10.1073/pnas.1311599110.
[3] Joel Franklin, Patrice Koehl, Sebastian Doniach, and Marc Delarue. MinActionPath: maximum likelihood trajectory for large-scale structural transitions in a coarse-grained locally harmonic energy landscape. Nucleic Acids Research, 35(suppl_2):W477–W482, July 2007. 00083. URL: https://academic.oup.com/nar/article-lookup/doi/10.1093/nar/gkm342, doi:10.1093/nar/gkm342.
[4] Richard J. Gowers, Max Linke, Jonathan Barnoud, Tyler J. E. Reddy, Manuel N. Melo, Sean L. Seyler, Jan Domański, David L. Dotson, Sébastien Buchoux, Ian M. Kenney, and Oliver Beckstein. MDAnalysis: A Python Package for the Rapid Analysis of Molecular Dynamics Simulations. Proceedings of the 15th Python in Science Conference, pages 98–105, 2016. 00152. URL: https://conference.scipy.org/proceedings/scipy2016/oliver_beckstein.html, doi:10.25080/Majora-629e541a-00e.
[5] Naveen Michaud-Agrawal, Elizabeth J. Denning, Thomas B. Woolf, and Oliver Beckstein. MDAnalysis: A toolkit for the analysis of molecular dynamics simulations. Journal of Computational Chemistry, 32(10):2319–2327, July 2011. 00778. URL: http://doi.wiley.com/10.1002/jcc.21787, doi:10.1002/jcc.21787.
# American Institute of Mathematical Sciences
1998, 1998(Special): 327-349. doi: 10.3934/proc.1998.1998.327
## Attractors of nonlinear evolution systems generated by time-dependent subdifferentials in Hilbert spaces
1 Department of Mathematics Graduate School of Science and Technology, Chiba University 1-33 Yayoi-chō, Inage-ku, Chiba, 263, Japan 2 Department of Mathematics, Graduate School of Science and Technology, Chiba University 1-33 Yayoi-chō, Inage-ku, Chiba, 263-8522 3 Department of Mathematics, Faculty of Education, Chiba University, 1-33 Yayoi-chō, Inage-ku, Chiba, 263–8522
Published November 2013
Citation: Akio Ito, Noriaki Yamazaki, Nobuyuki Kenmochi. Attractors of nonlinear evolution systems generated by time-dependent subdifferentials in Hilbert spaces. Conference Publications, 1998, 1998 (Special) : 327-349. doi: 10.3934/proc.1998.1998.327
Impact Factor: |
# On the Trick for Computing the Squared Euclidean Distances Between Two Sets of Vectors
Many times one wants to compute the squared pairwise Euclidean distances between two sets of observations. As always, it is enlightening to look at the computation in the single case, between a vector, $$x$$, and a vector, $$y$$: $$||x-y||^2$$. The computation for the distance can be rewritten into a simpler form,
\begin{align}||x-y||^2 &= (x_1-y_1)^2 + (x_2-y_2)^2 + \ldots + (x_n-y_n)^2 \\ &= x_1^2+y_1^2-2x_1y_1 + \ldots + x_n^2+y_n^2-2x_ny_n \\&= x \cdot x + y \cdot y - 2x \cdot y.\end{align}
This means that the squared distance between the vectors can be written as the sum of the dot products of $$x$$ and $$y$$ with themselves, minus two times the dot product between $$x$$ and $$y$$.
How can we generalize this into an expression involving two sets of observations? If we let the observations in the first set be rows in a matrix $$X$$ of size $$N \times M$$, and the second set be rows in a matrix, $$Y$$, of size $$K \times M$$, then the distance matrix, $$D$$, will be $$N \times K$$.
The value of the entry in the $$i$$-th row and $$j$$-th column of $$D$$ is the distance between the $$i$$-th row vector in $$X$$ and the $$j$$-th row vector in $$Y$$. That is, rows in $$D$$ refer to observations in $$X$$ and columns to observations in $$Y$$.
This means that the expression that generalizes the dot product of $$x$$ with itself should be a matrix whose $$i$$-th row consists of copies of the dot product of the $$i$$-th vector in $$X$$ with itself. In Matlab, there are several ways of writing this,
repmat(diag(X*X'),1,K),
or better,
repmat(sum(X.^2,2),1,K).
However, since Matlab’s repmat function is slow (or at least used to be), we can write the duplication as a matrix multiplication with a row vector of ones,
sum(X.^2,2) * ones(1,K).
We do the same for the values in $$Y$$, except this time we want the copies of the dot products to vary column-wise. This gives the following expression,
ones(N,1) * sum( Y.^2, 2 )'
The final matrix is the one that generalizes the dot product between $$x$$ and $$y$$. This is simply given as $$X*Y'$$. Putting it all together we get,
D = sum(X.^2,2)*ones(1,K) + ones(N,1)*sum( Y.^2, 2 )' - 2.*X*Y'
Nowadays you are probably better off using Matlab’s fast compiled version, pdist2, which is about 200% faster when the number of vectors is large.
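For reference, the same trick carries over directly to NumPy; here is a minimal sketch (the function name pairwise_sq_dists is my own) where broadcasting plays the role of the multiplications with ones(1,K) and ones(N,1):

import numpy as np

def pairwise_sq_dists(X, Y):
    """Squared Euclidean distances between the rows of X (N x M) and Y (K x M)."""
    XX = np.sum(X**2, axis=1)[:, None]  # N x 1: dot products of X's rows with themselves
    YY = np.sum(Y**2, axis=1)[None, :]  # 1 x K: dot products of Y's rows with themselves
    D = XX + YY - 2.0 * (X @ Y.T)       # broadcasting duplicates XX and YY
    return np.maximum(D, 0.0)           # clamp tiny negatives caused by round-off

The final clamp guards against the small negative values that round-off can produce, a pitfall pointed out in the comments below.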
### Distances using Eigen
If we want to implement this in Eigen, a C++ library for doing linear algebra much in the same manner as in Matlab, we can do it in the following way,
#include <iostream>
#include <Eigen/Dense>

// Construct two simple matrices
Eigen::MatrixXd X(3,4);
X << 1, 2, 3, 4,
     4, 5, 6, 4,
     7, 8, 9, 4;
Eigen::MatrixXd Y(4,4);
Y << 3,  2,  4, 20,
     4,  5,  5,  4,
     1,  5, 10,  4,
     3, 11,  0,  6;
const int N = X.rows();
const int K = Y.rows();
// Allocate parts of the expression
Eigen::MatrixXd XX(N,1), YY(1,K), XY(N,K), D(N,K);
// Compute norms (the row-wise sums of squares)
XX = X.array().square().rowwise().sum().matrix();
YY = Y.array().square().rowwise().sum().matrix().transpose();
XY = 2*X*Y.transpose();
// Compute final expression
D = XX * Eigen::MatrixXd::Ones(1,K);
D = D + Eigen::MatrixXd::Ones(N,1) * YY;
D = D - XY;
// For loop comparison (note: XY already carries the factor of 2)
Eigen::MatrixXd D2(N,K);
for( int i_row = 0; i_row != N; i_row++ )
    for( int j_col = 0; j_col != K; j_col++ )
        D2(i_row,j_col) = XX(i_row,0) + YY(0,j_col) - XY(i_row,j_col);
std::cout << D << std::endl;
std::cout << D2 << std::endl;
If we just want to compute the distance between one point and all other points, we can make use of some of Eigen's nice chaining functionality.
// Subtract the i-th row of Y from every row in X
X.rowwise() -= Y.row(i);
// Compute the row-wise squared norms
Eigen::VectorXd d = X.rowwise().squaredNorm();
## 10 thoughts on “On the Trick for Computing the Squared Euclidean Distances Between Two Sets of Vectors”
1. good article.
it should however say “XY = 2*X*Y.transpose()”.
i wonder how many bad copy-paste releases are out there by now ;).
1. Martin says:
Fixed. Thanks!
2. Marc says:
I’m not sure if you’ve further changed this post, but your code as is does not work. For one, the matrices are not 3×3. Also I imagine N and K should be .rows(), no?
1. Martin says:
Fixed.
3. JS says:
Line 23:
YY = X.array().square().rowwise().sum().transpose();
should be:
YY = Y.array().square().rowwise().sum().transpose();
And let me point out that the elements in D can end up being very small negative values due to numerical error. If you ever think about taking the square root of D to get the distance, this can produce NaNs.
1. Martin says:
True, there are many pitfalls that must be taken care of to make the code production-ready. This is just to illustrate the trick. Fixed the typo. Thanks!
4. Gurki says:
still great, just noticed another typo though:
YY = Y.array().square().rowwise().sum().transpose();
1. Martin says:
Squashed thanks!
5. Wayne Cochran says:
if X and Y are 3×3 matrices, why are they initialized with 12 and 16 values respectively?
1. Martin says: |
# Checking to see whether a document can be deleted
My code looks like this:
public void checkIfCanDelete() throws BusinessException {
    boolean canDelete = canDelete();
    if (canDelete) {
        checkIfLocked();
        if (!editable) {
            // throws a BusinessException here (message elided in the post)
        }
        return;
    }
    if (newDoc) {
        // throws a BusinessException here (message elided)
    }
    if (getSetting().isExcludedFromDeploy()) {
        // throws a BusinessException here (message elided)
    }
    checkIfLocked();
    // Default Message:
    // throws a BusinessException here (message elided)
}
In my case, if canDelete is true, then I have to call checkIfLocked and throw an exception if the document is not editable.
Here I think that I have duplicate code which throws the same exception, and I call the same method, checkIfLocked, twice in the same method block.
So is there any way to enhance this code block? Do I need to call the same method twice in the same method block?
@QuakeCore what do you mean by reliance/reliance – Salah Nov 12 '15 at 9:12
I assume that checkIfLocked() returns void, and throws a BusinessException if the document is locked.
There is one way to succeed, and many ways to fail. I think it would be beneficial to rearrange the code to make it obvious what the criteria for success are:
public void checkIfCanDelete() throws BusinessException {
    if (editable && canDelete()) {
        checkIfLocked();
        return; // Can delete
    }
    // Can't delete. We just have to choose a reason for the denial.
    if (canDelete()) {
        assert !editable;
        // throws the "not editable" BusinessException (message elided)
    } else if (newDoc) {
        // throws the "new document" BusinessException (message elided)
    } else if (getSetting().isExcludedFromDeploy()) {
        // throws the "excluded from deploy" BusinessException (message elided)
    }
    checkIfLocked();
}
The code above preserves the same logic as the original. However, if you're not picky about which reason you pick for the denial, you could simplify the code further:
public void checkIfCanDelete() throws BusinessException {
    checkIfLocked();
    if (!(editable && canDelete())) {
        // The throw statement was garbled in this copy; it picked a message
        // with nested conditionals, roughly (message constants are placeholders):
        throw new BusinessException(
            newDoc ? NEW_DOC_MESSAGE :
            getSetting().isExcludedFromDeploy() ? EXCLUDED_MESSAGE :
            DEFAULT_MESSAGE
        );
    }
}
I agree with Malachi about moving the variable into the if statement, since it's not being used anywhere else.
If they are both performing a checkIfLocked() on the file, wouldn't it be better to perform this first, then continue on with the other checks?
public void checkIfCanDelete() throws BusinessException {
    checkIfLocked();
    if (canDelete()) {
        if (!editable) {
            // throws a BusinessException here (message elided in the post)
        }
        return;
    }
    if (newDoc) {
        // throws a BusinessException here (message elided)
    } else if (getSetting().isExcludedFromDeploy()) {
        // throws a BusinessException here (message elided)
    }
    // Default Message:
}
Updated the code by removing the 2 elses – Brian Apr 15 '14 at 15:21
right here:
boolean canDelete = canDelete();
if (canDelete) {
    checkIfLocked();
    if (!editable) {
        // throws a BusinessException here (message elided)
    }
    return;
}
you can remove the Boolean variable and code it like this
if (canDelete()) {
    checkIfLocked();
    if (!editable) {
        // throws a BusinessException here (message elided)
    }
    return;
}
if the return value of the function is a boolean you should be able to just call the function inside the expression of the if statement.
that is the only thing that I could see.
I don't know what checkIfLocked() does, it would be nice to see that as well.
I don't know what all checkIfLocked() and canDelete() do, but I would imagine that you should just have checkIfLocked() inside of your canDelete() function and eliminate the call to it from this checkIfCanDelete() method.
If it's locked you can't delete it right?
# Homework 6
Please answer the following questions in complete sentences in a typed manuscript and submit the solution on Blackboard by April 11th at noon. (These will be back before the midterm.)
## Problem 0: Homework checklist
• Please identify anyone, whether or not they are in the class, with whom you discussed your homework. This problem is worth 1 point, but on a multiplicative scale.
• Make sure you have included your source-code and prepared your solution according to the most recent Piazza note on homework submissions.
## Problem 1: Gautschi Exercise 4.1
The following sequences converge to $0$ as $n\to\infty$:
• $v_n = n^{-10}$
• $w_n = 10^{-n}$
• $x_n = 10^{-n^2}$
• $y_n = n^{10} 3^{-n}$
• $z_n = 10^{-3 \cdot 2^n}$
Indicate the type of convergence for each sequence in terms of
• Sublinear
• Linear
• Superlinear
1. Using any method we've seen to solve a scalar nonlinear equation (bisection, false position, secant), develop a routine to compute $\sqrt{x}$ using only addition, subtraction, multiplication, and division (and basic control structures) to full numerical precision. (Use double-precision.) A minimal sketch follows this list.
2. Compare the results of your method to the Matlab/Julia/Python function sqrt. Comment on any differences that surprise you.
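For instance, a minimal bisection sketch along these lines (my own illustration, using only arithmetic, comparisons, and a loop; not a required solution):

def my_sqrt(x):
    """Approximate sqrt(x) by bisection on f(y) = y*y - x, using only +, -, *, /."""
    if x < 0:
        raise ValueError("x must be nonnegative")
    lo, hi = 0.0, x if x > 1 else 1.0    # sqrt(x) always lies in [0, max(x, 1)]
    while hi - lo > 1e-15 * (1.0 + hi):  # stop near double precision
        mid = (lo + hi) / 2
        if mid * mid < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

For part 2, compare, e.g., my_sqrt(2.0) against math.sqrt(2.0); the two should agree to roughly the last digit or two of double precision.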
Consider the problem $f(x) = (1/2) x - \sin x = 0$. The only positive real root is located in $[ 1/2 \pi, \pi ]$. Compare the performance of bisection, false position, and the secant method in terms of the number of function evaluations needed to compute the solution to $7$ digits and to full machine precision. For all these methods, use the boundary points $[a,b] = [ 1/2 \pi, \pi ]$ (or use those as the first two points for the secant method).
# Iceberg
What is the surface area of a 50 cm thick iceberg (in the shape of a cuboid) that can carry a man with luggage with a total weight of 120 kg?
Correct result:
S = 3 m2
#### Solution:
The iceberg can carry exactly the load whose weight equals the difference between the buoyant force of the displaced water and the iceberg's own weight, so the effective density is the difference of the two densities:
$\begin{aligned} h &= 50\ \text{cm} = 0.5\ \text{m} \\ m &= 120\ \text{kg} \\ \rho_{1} &= 1000\ \text{kg/m}^3 \quad (\text{water}) \\ \rho_{2} &= 920\ \text{kg/m}^3 \quad (\text{ice}) \\ \rho &= \rho_{1}-\rho_{2} = 1000-920 = 80\ \text{kg/m}^3 \\ m &= \rho V \;\Rightarrow\; V = m/\rho = 120/80 = 1.5\ \text{m}^3 \\ V &= S h \;\Rightarrow\; S = V/h = 1.5/0.5 = 3\ \text{m}^2 \end{aligned}$
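A quick numerical check of the arithmetic (an illustration; the densities 1000 and 920 kg/m³ are the ones assumed by the solution above):

rho_water, rho_ice = 1000.0, 920.0  # kg/m^3, as assumed in the solution above
m, h = 120.0, 0.50                  # payload mass in kg, iceberg thickness in m
V = m / (rho_water - rho_ice)       # volume whose extra buoyancy carries the payload
S = V / h                           # cuboid: V = S * h
print(V, S)                         # 1.5 3.0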
## Next similar math problems:
• Wooden box
The block-shaped box was placed on the ground, leaving a rectangular print with dimensions of 3 m and 2 m. When flipped over to another wall, a print with dimensions of 0.5 m and 3 m remained in the sand. What is the volume of the wooden box?
• Stone
When Peter threw stone in a box of water he discovered that the water level has risen by 6 cm. The box has a cuboid shape, the bottom has dimensions of 24 cm and 14 cm, height is 40 cm. What volume has a stone?
• Railway wagon
The railway wagon holds a 75 m³ load. The wagon can carry a maximum weight of 30 tonnes. a) What is the maximum density the material may have if we fill the whole wagon with it? b) What amount of peat (density 350 kg/m³) can 15 wagons carry?
• Iron cast
What is the weight of cast iron with a volume of 3575 cubic centimeters? The density of the cast iron is 7600 kg/m³.
• Square prism
Calculate the volume of a square prism of height 2 dm, wherein the base is a rectangle with sides 17 cm and 1.3 dm
• Triangular prism
Calculate the volume of a triangular prism 10 cm high, the base of which is an equilateral triangle with side a = 5 cm and height va = 4.3 cm
• Wall painting
The wall is 4 meters wide and 2 meters high. A window in the wall has dimensions 2×1.8 meters. How many liters of paint are needed to paint these walls with two layers, if 1 m² needs 1 liter of paint?
• Rainfall
On Thursday, fell 1 cm rainfall. How many liters of water fell to rectangular garden with dimensions of 22 m x 35 m?
• Rainwater
On a garden with an area of 800 square meters, 3 mm of rainwater fell. How many 10-liter cans of water would be needed to water this garden equally?
• Concrete pillar
How many m³ of concrete are needed for the construction of a pillar in the shape of a regular tetrahedral prism, when a = 60 cm and the height of the pillar is 2 meters?
• Aquarium
I have an aquarium that is 100 cm long and 40 cm wide and 40 cm in height. We fill it with water. How much will it weigh?
• Wood prisms
How much do 25 prisms with dimensions 8×8×200 cm weigh? 1 cubic meter of wood weighs 800 kg.
• The cuboid
The cuboid is filled to the brim with water. The external dimensions are 95 cm, 120 cm, and 60 cm. The thickness of all walls and the bottom is 5 cm. How many liters of water fit into the cuboid?
• Gold horseshoe
Calculate the volume of a gold horseshoe that weighs 750g.
• Metal pyramid
Find the weight of a regular quadrilateral pyramid with a 5 cm side length and 6.5 cm body height, made from material with density g/cm3.
• Solid in water
The solid weighs 11.8 g in air and 10 g in water. Calculate the density of the solid.
## Projective Texture
In order to create our flashlight effect, we need to do something called projective texturing. Projective texturing is a special form of texture mapping. It is a way of generating texture coordinates for a texture, such that it appears that the texture is being projected onto a scene, in much the same way that a film projector projects light. Therefore, we need to do two things: implement projective texturing, and then use the value we sample from the projected texture as the light intensity.
The key to understanding projected texturing is to think backwards, compared to the visual effect we are trying to achieve. We want to take a 2D texture and make it look like it is projected onto the scene. To do this, we therefore do the opposite: we project the scene onto the 2D texture. We want to take the vertex positions of every object in the scene and project them into the space of the texture.
Since this is a perspective projection operation, and it involves transforming vertex positions, naturally we need a matrix. This is math we already know: we have vertex positions in model space. We transform them to a camera space, one that is different from the one we use to view the scene. Then we use a perspective projection matrix to transform them to clip-space; both the matrix and this clip-space are again different spaces from what we use to render the scene. One perspective divide later, and we're done.
That last part is the small stumbling block. See, after the perspective divide, the visible world, the part of the world that is projected onto the texture, lives in a [-1, 1] sized cube. That is the size of NDC space, though it is again a different NDC space from the one we use to render. The problem is that the range of the texture coordinates, the space of the 2D texture itself, is [0, 1].
This is why we needed the prior discussion of post-projective transforms. Because we need to do a post-projective transform here: we have to transform the XY coordinates of the projected position from [-1, 1] to [0, 1] space. And again, we do not want to have to perform the perspective divide ourselves; OpenGL has special functions for texture accesses with a divide. Therefore, we encode the translation and scale as a post-projective transformation. As previously demonstrated, this is mathematically identical to doing the transform after the division.
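To make the claim of mathematical identity concrete, here is a small NumPy check (my own illustration, not part of the tutorial's code): folding the [-1, 1] to [0, 1] scale-and-bias into the matrix before the perspective divide gives the same result as dividing first and remapping afterwards.

import numpy as np

# A clip-space position (x, y, z, w), as produced by some projection matrix.
clip = np.array([0.3, -0.8, 1.7, 2.0])

# Post-projective scale-and-bias matrix: maps x and y from [-1, 1] to [0, 1].
bias = np.array([[0.5, 0.0, 0.0, 0.5],
                 [0.0, 0.5, 0.0, 0.5],
                 [0.0, 0.0, 1.0, 0.0],
                 [0.0, 0.0, 0.0, 1.0]])

a = (bias @ clip)[:2] / clip[3]        # transform first, then divide by w
b = (clip[:2] / clip[3]) * 0.5 + 0.5   # divide by w first, then remap
print(np.allclose(a, b))               # True: the two orders agree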
This entire process represents a new kind of light. We have seen directional lights, which are represented by a light intensity coming from a single direction. And we have seen point lights, which are represented by a position in the world which casts light in all directions. What we are defining now is typically called a spotlight: a light that has a position, direction, and oftentimes a few other fields that limit the size and nature of the spot effect. Spotlights cast light on a cone-shaped area.
We implement spotlights via projected textures in the Projected Light project. This tutorial uses a similar scene to the one before, though with slightly different numbers for lighting. The main difference, scene wise, is the addition of a textured background box.
The camera controls work the same way as before. The projected flashlight, represented by the red, green, and blue axes, is moved with the IJKL keyboard keys, with O and U moving up and down, respectively. The right mouse button rotates the flashlight around; the blue line points in the direction of the light. The flashlight's position and orientation are built around the camera controls, so it rotates around a point in front of the flashlight. It translates relative to its current facing as well. As usual, holding down the Shift key will cause the flashlight to move more slowly.
Pressing the G key will toggle all of the regular lighting on and off. This makes it easier to see just the light from our projected texture.
### Flashing the Light
Let us first look at how we achieve the projected texture effect. We want to take the model space positions of the vertices and project them onto the texture. However, there is one minor problem: the scene graph system provides a transform from model space into the visible camera space. We need a transform to our special projected texture camera space, which has a different position and orientation.
We resolve this by being clever. We already have positions in the viewing camera space. So we simply start there and construct a matrix from view camera space into our texture camera space.
Example 17.6. View Camera to Projected Texture Transform
glutil::MatrixStack lightProjStack;
//Texture-space transform
lightProjStack.Translate(0.5f, 0.5f, 0.0f);
lightProjStack.Scale(0.5f, 0.5f, 1.0f);
//Project. Z-range is irrelevant.
lightProjStack.Perspective(g_lightFOVs[g_currFOVIndex], 1.0f, 1.0f, 100.0f);
//Transform from main camera space to light camera space.
lightProjStack.ApplyMatrix(lightView);
lightProjStack.ApplyMatrix(glm::inverse(cameraMatrix));
g_lightProjMatBinder.SetValue(lightProjStack.Top());
Reading the modifications to lightProjStack in bottom-to-top order, we begin by using the inverse of the view camera matrix. This transforms all of our vertex positions back to world space, since the view camera matrix is a world-to-camera matrix. We then apply the world-to-texture-camera matrix. This is followed by a projection matrix, which uses an aspect ratio of 1.0. The last two transforms move us from [-1, 1] NDC space to the [0, 1] texture space.
The zNear and zFar for the projection matrix are entirely irrelevant. They need to be legal values for your perspective matrix (strictly greater than 0, and zFar must be larger than zNear), but the values themselves are meaningless. We will discard the Z coordinate entirely later on.
We use a matrix uniform binder to associate that transformation matrix with all of the objects in the scene. This is all we need to do to set up the projection, as far as the matrix math is concerned.
Our vertex shader (projLight.vert) takes care of things in the obvious way:
lightProjPosition = cameraToLightProjMatrix * vec4(cameraSpacePosition, 1.0);
Note that this line is part of the vertex shader; lightProjPosition is passed to the fragment shader. One might think that the projection would work best in the fragment shader, but doing it per-vertex is actually just fine. The only time one would need to do the projection per-fragment would be if one was using imposters or was otherwise modifying the depth of the fragment. Indeed, because it works so well with a simple per-vertex matrix transform, projected textures were once a preferred way of doing cheap lighting in many situations.
In the fragment shader, projLight.frag, we want to use the projected texture as a light. We have the ComputeLighting function in this shader from prior tutorials. All we need to do is make our projected light appear to be a regular light.
PerLight currLight;
currLight.cameraSpaceLightPos = vec4(cameraSpaceProjLightPos, 1.0);
currLight.lightIntensity =
textureProj(lightProjTex, lightProjPosition.xyw) * 4.0;
currLight.lightIntensity = lightProjPosition.w > 0 ?
currLight.lightIntensity : vec4(0.0);
We create a simple structure that we fill in. Later, we pass this structure to ComputeLighting, and it does the usual thing.
The view camera space position of the projected light is passed in as a uniform. It is necessary for our flashlight to properly obey attenuation, as well as to find the direction towards the light.
The next line is where we do the actual texture projection. The textureProj is a texture accessing function that does projective texturing. Even though lightProjTex is a sampler2D (for 2D textures), the texture coordinate has three dimensions. All forms of textureProj take one extra texture coordinate compared to the regular texture function. This extra texture coordinate is divided into the previous one before being used to access the texture. Thus, it performs the perspective divide for us.
### Note
Mathematically, there is virtually no difference between using textureProj and doing the divide ourselves and calling texture with the results. While there may not be a mathematical difference, there very well may be a performance difference. There may be specialized hardware that does the division much faster than the general-purpose opcodes in the shader. Then again, there may not. However, using textureProj will certainly be no slower than texture in the general case, so it's still a good idea.
Notice that the value pulled from the texture is scaled by 4.0. This is done because the color values stored in the texture are clamped to the [0, 1] range. To bring it up to our high dynamic range, we need to scale the intensity appropriately.
The texture being projected is bound to a known texture unit globally; the scene graph already associates the projective shader with that texture unit. So there is no need to do any special work in the scene graph to make objects use the texture.
The last statement is special. It compares the W component of the interpolated position against zero, and sets the light intensity to zero if the W component is less than or equal to 0. What is the purpose of this?
It stops the following from happening: the texture being projected backwards, onto objects behind the flashlight as well as in front of it. (The original page shows a figure of this artifact here.)
The projection math doesn't care what side of the center of projection an object is on; it will work either way. And since we do not actually do clipping on our texture projection, we need some way to prevent back projection from happening. We effectively need to do some form of clipping.
Recall that, given the standard perspective transform, the W component is the negation of the camera-space Z. Since the camera in our camera space is looking down the negative Z axis, all positions that are in front of the camera must have a W > 0. Therefore, if W is less than or equal to 0, then the position is behind the camera.
### Spotlight Tricks
The size of the flashlight can be changed simply by modifying the field of view in the texture projection matrix. Pressing the Y key will increase the FOV, and pressing the N key will decrease it. An increase to the FOV means that the light is projected over a greater area. At a large FOV, we effectively have an entire hemisphere of light.
Another interesting trick we can play is to have multi-colored lights. Press the 2 key; this will change to a texture that contains spots of various different colors.
This kind of complex light emitter would not be possible without using a texture. Well it could be possible without textures, but it would require a lot more processing power than a few matrix multiplies, a division in the fragment shader, and a texture access. Press the 1 key to go back to the flashlight texture.
There is one final issue that can and will crop up with projected textures: what happens when the texture coordinates are outside of the [0, 1] boundary. With previous textures, we used either GL_CLAMP_TO_EDGE or GL_REPEAT for the S and T texture coordinate wrap modes. Repeat is obviously not a good idea here; thus far, our sampler objects have been clamping to the texture's edge. That worked fine because our edge texels have all been zero. To see what happens when they are not, press the 3 key.
That rather ruins the effect. Fortunately, OpenGL does provide a way to resolve this. It gives us a way to say that texels fetched outside of the [0, 1] range should return a particular color. As before, this is set up with the sampler object:
Example 17.7. Border Clamp Sampler Objects
glSamplerParameteri(g_samplers[1], GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glSamplerParameteri(g_samplers[1], GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
float color[4] = {0.0f, 0.0f, 0.0f, 1.0f};
glSamplerParameterfv(g_samplers[1], GL_TEXTURE_BORDER_COLOR, color);
The S and T wrap modes are set to GL_CLAMP_TO_BORDER. Then the border's color is set to zero. To toggle between the edge clamping sampler and the border clamping one, press the H key.
That's much better now.
## The Perturbative Pole Mass in QCD
by Andreas S. Kronfeld.
Fermilab report FERMILAB-PUB-98/139-T
SPIRES entry
E-print archive hep-ph/9805215
### Some Background
Many physicists are probably astonished that a proof of the infrared finiteness and gauge independence of the pole mass in QCD is being written up in 1998. If you are one of them, please read the following before assuming that the results have long been known.
This paper grew out of Referee A's report on another paper of mine, hep-lat/9712024, written with Bart Mertens and Aida El-Khadra and submitted to (and published in) Physical Review D. The referee wrote
2) The authors assume that the pole mass of a quark is a well-defined concept order by order in perturbation theory. To the best of my knowledge this has not been shown in the literature. It has been demonstrated that the pole mass is infrared finite and gauge invariant to order $\alpha^2$ [C], and the corresponding finite part in the relation to the $\overline{\rm MS}$ mass has been worked out in ref. [D].
It is quite possible that infrared problems prevent a definition of the quark's pole mass to all orders in perturbation theory....
[C] R. Tarrach, Nucl. Phys. B183 (1981) 384.
[D] N. Gray, D.J. Broadhurst, W. Grafe and K. Schilcher, Z. Phys. C 48, 673 (1990).
Let me add that other parts of the report revealed that Referee A is exceptionally well-informed on theoretical issues. It turns out, he/she also knows the literature better than most of us.
After a considerable literature search, I could not find a proof anywhere. During my search I found numerous authors who assert that the pole mass is, indeed, well-defined order by order in perturbation theory. Many papers cite Tarrach's paper for an all-orders proof, even though it sticks to two loops. Indeed, Tarrach is openly worried about the infrared. He writes [italics mine]
It may be evident to many theorists that the pole-mass is gauge-parameter independent in perturbative QCD, but it is less evident whether it is IR finite or not. Let us study these issues at the two loop level.
When Tarrach did his work, in 1981, there had been an effort to uncover a confining mechanism in the infrared divergences of QCD, so his concerns are a sign of the times.
In trying to trace the history of the QCD pole mass, I've noticed two folklores, which have evolved side-by-side. One, espoused by Referee A, holds that infrared divergences in QCD are so serious that nothing can be taken for granted. The other, which is nowadays probably more popular, takes for granted that the pole mass is infrared finite. (I have found no citation to a paper, even one on QED, that purports to study the problem to all orders; a remark in a footnote shows that Noboru Nakanishi knew what to do [Prog. Theor. Phys. 19 (1958) 159].)
I realize that some of you will have known the QED literature well enough to see that the generalization to QCD was straightforward. I would be happy to acknowledge unpublished work on the subject here: feel free to send me a copy of your notes. (Of course, it goes without saying that I would like to know of a detailed published reference.) At the same time, I hope that my paper serves as a useful reference, underpinning the (now publicly proven) fact that the pole mass in QCD is well defined at every order in perturbation theory.
During the time this paper was circulated as an e-print, several physicists from around the world alerted me to proofs of gauge independence of the pole mass, in QED and QCD, and of analogous quantities such as gluon damping rates at nonzero temperature. By and large, these papers do not pay close attention to infrared divergences. An exception is in Lowell Brown's text, Quantum Field Theory, which contains an elegant proof that the infrared divergences and gauge dependence of the electron propagator (in QED) reside in the residue only, not the pole position. The proof is relegated to a problem and is, thus, easy to overlook. The proof assumes an Abelian gauge group, and I have not tried to generalize it.
Finally, I would also like to thank Referee A; without his/her strict report, I would not have tried to prove something that so many “experts” thought was done in 1981.
01 May 1998 --- Andreas Kronfeld ask@fnal.gov
Modified 29 July 1998 |
## Section: Scientific Foundations
### Spatial approximation for solving ODEs
Participants : Philippe Chartier, Erwan Faou.
The technique consists in solving an approximate initial value problem on an approximate invariant manifold for which an atlas consisting of easily computable charts exists. The numerical solution obtained in this way never drifts far off the exact manifold, even for long-time integration.
Instead of solving the initial Cauchy problem, the technique consists in solving an approximate initial value problem of the form:
$$\tilde{y}'(t) = \tilde{f}(\tilde{y}(t)), \qquad \tilde{y}(0) = \tilde{y}_0, \tag{13}$$
on an invariant manifold $\tilde{\mathcal{M}} = \{ y \in \mathbb{R}^n \; ; \; \tilde{g}(y) = 0 \}$, where $\tilde{f}$ and $\tilde{g}$ approximate $f$ and $g$ in a sense that remains to be defined. The idea behind this approximation is to replace the differential manifold $\mathcal{M}$ by a suitable approximation $\tilde{\mathcal{M}}$ for which an atlas consisting of easily computable charts exists. If this is the case, one can reformulate the vector field $\tilde{f}$ on each domain of the atlas in an easy way. The main obstacle of parametrization methods [56] or of Lie-methods [53] is then overcome.
The numerical solution obtained in this way obviously does not lie on the exact manifold: it lives on the approximate manifold $\tilde{\mathcal{M}}$. Nevertheless, it never drifts off the exact manifold considerably, if $\mathcal{M}$ and $\tilde{\mathcal{M}}$ are chosen appropriately close to each other.
An obvious prerequisite for this idea to make sense is the existence of a neighborhood $\mathcal{V}$ of $\mathcal{M}$ containing the approximate manifold $\tilde{\mathcal{M}}$ and on which the vector field $f$ is well-defined. If this assumption is fulfilled, then it is possible to construct a new admissible vector field $\tilde{f}$ given $\tilde{g}$. By admissible, we mean tangent to the manifold $\tilde{\mathcal{M}}$, i.e. such that
$$\forall \, y \in \tilde{\mathcal{M}}, \quad \tilde{G}(y)\, \tilde{f}(y) = 0,$$
where, for convenience, we have denoted $\tilde{G}(y) = \tilde{g}'(y)$. For any $y \in \tilde{\mathcal{M}}$, we can indeed define
$$\tilde{f}(y) = \left( I - P(y) \right) f(y), \tag{14}$$
where $P(y) = \tilde{G}^T(y) \left( \tilde{G}(y) \tilde{G}^T(y) \right)^{-1} \tilde{G}(y)$ is the projection along $\tilde{\mathcal{M}}$.
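A minimal NumPy sketch of formula (14) (my own illustration): we project an arbitrary vector field onto the tangent space of the approximate manifold, here taken to be the unit sphere with $\tilde{g}(y) = \|y\|^2 - 1$.

import numpy as np

def g(y):            # constraint defining the (approximate) manifold: unit sphere
    return np.array([y @ y - 1.0])

def G(y):            # Jacobian g'(y), here a 1 x n matrix
    return 2.0 * y[None, :]

def f(y):            # some ambient vector field (arbitrary example)
    return np.array([y[1], -y[0], 0.5])

def f_tilde(y):      # formula (14): remove the normal component of f
    Gy = G(y)
    P = Gy.T @ np.linalg.solve(Gy @ Gy.T, Gy)
    return (np.eye(len(y)) - P) @ f(y)

y = np.array([1.0, 0.0, 0.0])
print(G(y) @ f_tilde(y))   # ~0: f_tilde is tangent to the manifold at y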
Chapter 5: Systems of Equations
# 5.2 Substitution Solutions
While solving a system by graphing has advantages, it also has several limitations. First, it requires the graph to be perfectly drawn: if the lines are not straight, it may result in the wrong answer. Second, graphing is challenging if the values are really large—over 100, for example—or if the answer is a decimal that the graph will not be able to depict accurately, like 3.2134. For these reasons, graphing is rarely used to solve systems of equations. Commonly, algebraic approaches such as substitution are used instead.
Example 5.2.1
Find the intersection of the equations $2x - 3y = 7$ and $y = 3x - 7.$
Since $y = 3x - 7,$ substitute $3x-7$ for the $y$ in $2x - 3y = 7.$
The result of this looks like:
$2x - 3(3x - 7) = 7$
Now solve for the variable $x$:
$\begin{array}{rrrrrrr} 2x&-&9x&+&21&=&7 \\ &&&-&21&&-21 \\ \hline &&&&\dfrac{-7x}{-7}&=&\dfrac{-14}{-7} \\ \\ &&&&x&=&2 \end{array}$
Once the $x$-coordinate is known, the $y$-coordinate is easily found.
To find $y,$ use the equations $y = 3x - 7$ and $x = 2$:
$\begin{array}{l} y = 3(2) - 7 \\ \phantom{y}= 6 - 7 \\ \phantom{y}=-1 \end{array}$
These lines intersect at $x = 2$ and $y = -1$, or at the coordinate $(2, -1).$
This means the system is both consistent and independent.
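Such a solution is easy to verify by machine; a short SymPy check (illustrative, not part of the text):

from sympy import symbols, Eq, solve

x, y = symbols("x y")
# The two equations from Example 5.2.1
print(solve([Eq(2*x - 3*y, 7), Eq(y, 3*x - 7)], [x, y]))  # {x: 2, y: -1}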
Example 5.2.2
Find the intersection of the equations $y + 4 = 3x$ and $2y - 6x = -8.$
To solve this using substitution, $y$ or $x$ must be isolated. The first equation is the easiest in which to isolate a variable:
$\begin{array}{rrrrrrr} y&+&4&=&3x&& \\ &-&4&&-4&& \\ \hline &&y&=&3x&-&4 \end{array}$
Substituting this value for $y$ into the second equation yields:
$\begin{array}{rrrrrrr} 2(3x&-&4)&-&6x&=&-8 \\ 6x&-&8&-&6x&=&-8 \\ &+&8&&&&+8 \\ \hline &&&&0&=&0 \end{array}$
The equations are identical, and when they are combined, they completely cancel out. This is an example of a consistent and dependent set of equations that has many solutions.
Example 5.2.3
Find the intersection of the equations $6x - 3y = -9$ and $-2x + y = 5.$
The second equation looks to be the easiest in which to isolate a variable, so:
$\begin{array}{rrrrrrr} -2x&+&y&=&5&& \\ +2x&&&&+2x&& \\ \hline &&y&=&2x&+&5 \end{array}$
Substituting this into the first equation yields:
$\begin{array}{rrcrrrr} 6x&-&3(2x&+&5)&=&-9 \\ 6x&-&6x&-&15&=&-9 \\ &&&&-15&=&-9 \end{array}$
The variables cancel out, resulting in an untrue statement. These are parallel lines that have identical slopes but different intercepts. There is no solution, and these are inconsistent equations.
# Questions
For questions 1 to 20, solve each system of equations by substitution.
1. $\left\{ \begin{array}{rrrrr} y&=&-3x&& \\ y&=&6x&-&9 \end{array}\right.$
2. $\left\{ \begin{array}{rrrrr} y&=&x&+&5 \\ y&=&-2x&-&4 \end{array}\right.$
3. $\left\{ \begin{array}{rrrrr} y&=&-2x&-&9 \\ y&=&2x&-&1 \end{array}\right.$
4. $\left\{ \begin{array}{rrrrr} y&=&-6x&+&3 \\ y&=&6x&+&3 \end{array}\right.$
5. $\left\{ \begin{array}{rrrrr} y&=&6x&+&4 \\ y&=&-3x&-&5 \end{array}\right.$
6. $\left\{ \begin{array}{rrrrr} y&=&3x&+&13 \\ y&=&-2x&-&22 \end{array}\right.$
7. $\left\{ \begin{array}{rrrrr} y&=&3x&+&2 \\ y&=&-3x&+&8 \end{array}\right.$
8. $\left\{ \begin{array}{rrrrr} y&=&-2x&-&9 \\ y&=&-5x&-&21 \end{array}\right.$
9. $\left\{ \begin{array}{rrrrr} y&=&2x&-&3 \\ y&=&-2x&+&9 \end{array}\right.$
10. $\left\{ \begin{array}{rrrrr} y&=&7x&-&24 \\ y&=&-3x&+&16 \end{array}\right.$
11. $\left\{ \begin{array}{rrrrrrr} &&y&=&3x&-&4 \\ 3x&-&3y&=&-6&& \end{array}\right.$
12. $\left\{ \begin{array}{rrrrrrr} -x&+&3y&=&12&& \\ &&y&=&6x&+&21 \end{array}\right.$
13. $\left\{ \begin{array}{rrrrrrr} &&y&=&-6&& \\ 3x&-&6y&=&30&& \end{array}\right.$
14. $\left\{ \begin{array}{rrrrrrr} 6x&-&4y&=&-8&& \\ &&y&=&-6x&+&2 \end{array}\right.$
15. $\left\{ \begin{array}{rrrrrrr} &&y&=&-5&& \\ 3x&+&4y&=&-17&& \end{array}\right.$
16. $\left\{ \begin{array}{rrrrrrr} 7x&+&2y&=&-7&& \\ &&y&=&5x&+&5 \end{array}\right.$
17. $\left\{ \begin{array}{rrrrr} -6x&+&6y&=&-12 \\ 8x&-&3y&=&16 \end{array}\right.$
18. $\left\{ \begin{array}{rrrrr} -8x&+&2y&=&-6 \\ -2x&+&3y&=&11 \end{array}\right.$
19. $\left\{ \begin{array}{rrrrr} 2x&+&3y&=&16 \\ -7x&-&y&=&20 \end{array}\right.$
20. $\left\{ \begin{array}{rrrrr} -x&-&4y&=&-14 \\ -6x&+&8y&=&12 \end{array}\right.$ |
## Statement
###### Theorem
Let $E$ be the class of final functors and $M$ be the class of discrete fibrations. Then $(E,M)$ is an orthogonal factorization system of Cat, called the comprehensive factorization system.
###### Proof
Let $F:C\to D$ be a functor. Define $K:D\to Set$ as the left Kan extension of the constant presheaf $C\to Set$ at the singleton along $F$. Explicitly, $K(d)$ is the set of connected components of $F/d$. Let $E=\int K$, so an object of $E$ is an ordered pair $(d, [\alpha:Fc\to d])$ where $[\alpha]$ denotes the connected component of $(c,\alpha)$. Then it is not hard to verify that $e:C\to E$ mapping $c\mapsto (Fc,[id_{Fc}])$ is final, the canonical $m:E\to D$ is a discrete fibration, and $F=me$.
Now we show that $E$ and $M$ are replete subcategories of $Cat$. Clearly they include all isomorphisms.
If functors $F:C\to D$ and $G: D\to E$ are final, then we show that $G\circ F$ is final. For $e\in E$, there is an element $(d,\alpha:e\to Gd)$ of $e/G$, and thence an element $(c,\beta:d\to Fc)$ of $d/F$, so we obtain an element $(c, e \stackrel{\alpha}{\to} Gd \stackrel{G\beta}{\to} GFc)$ of $e/GF$. Now we must show that any two elements $(c,\gamma:e\to GFc),(c',\gamma':e\to GFc')$ are connected. Since $G$ is final, the elements $(Fc,\gamma)$ and $(Fc',\gamma')$ of $e/G$ are connected. It suffices to consider the case of a zig-zag of length one: a morphism $f:Fc\to Fc'$ such that $\gamma' = Gf \circ \gamma$ (the diagram is omitted in this copy).
By finality of $F$, the elements $(c,id:Fc\to Fc)$ and $(c', f:Fc\to Fc')$ of $Fc/F$ are connected. A zig-zag path between them, by precomposition with $\gamma$, becomes a zig-zag path between $(c,\gamma)$ and $(c',\gamma')$. So $G\circ F$ is final.
The proof that discrete fibrations form a subcategory is omitted.
Now we must show that the lifting problem (the diagram is omitted in this copy; the square is spelled out below) has a unique solution $h$ when $e\in E$ and $m\in M$.
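Spelling out the omitted square (the object labels $A$, $B$, $X$, $Y$ are mine): given $f : A \to X$ and $g : B \to Y$ with $m f = g e$, we seek the diagonal $h$,
$$\begin{array}{ccc} A & \stackrel{f}{\longrightarrow} & X \\ {\scriptstyle e}\downarrow & & \downarrow{\scriptstyle m} \\ B & \stackrel{g}{\longrightarrow} & Y \end{array} \qquad h : B \to X, \quad h e = f, \quad m h = g.$$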
We prove uniqueness first. For $b\in B$, let $(a,\alpha:b\to e(a))\in b/e$. Then $h(\alpha)$ must be the unique lifting of $g(\alpha)$, and $h(b)$ the domain of this lifting, proving uniqueness of $h$ on objects. For $\beta:b\to b'$ in $B$, $h(\beta)$ must be the unique lifting of $g(\beta)$, so $h$ is unique (if it exists).
Now we must show that this $h$ is well-defined, functorial, and a solution to the lifting problem. If $(a',\alpha':b\to e(a'))$ is another element of $b/e$, then WLOG let $u:a\to a'$ be such that $\alpha' = e(u) \circ \alpha$ (the diagram is omitted in this copy).
Lifting this diagram, we see that $g(\alpha)$ and $g(\alpha')$ must lift to morphisms with identical domain, so $h$ is well-defined on objects.
For $\beta:b\to b'$ in $B$, let $\alpha:b'\to e(a)$, and by the (omitted) diagram relating $g(\beta)$, $g(\alpha)$ and $g(\alpha\circ\beta)$,
we see that $g(\beta)$ and $g(\alpha\circ\beta)$ must lift to morphisms with identical domains, so $h(\beta)$ has domain $h(b)$.
Functoriality now follows easily from uniqueness of lifting for a discrete fibration, and it is not hard to show that $h$ is a solution to the lifting problem. |
# Dice
The diceplan creator needs to be running dicestatus in a cron task or at some regular frequency. This finishes any unfinished bets and also creates entropy tx.
Additionally, you need to create txids with hashed entropy, basically any dice tx other than a dicebet will add hashed entropy, but you need to create a few at first via diceaddfunds.
The diceinfo, dicelist, diceaddress work just like the rewards counterparts.
Once there is a dice plan with funds, you can make dicebets. For now to resolve it, the creator of the diceplan needs to do a dicewinner or diceloser.
Lastly, there will be a dicerefund RPC that will allow anybody to undo a dicebet; this would happen only if the diceplan node is offline. It could be that it refunds or it becomes an automatic win.
In order to save a step, the entropy of the dicebet is not hashed. But I guess I need to hash it if we want it to refund after a timeout instead of being an automatic win, as the way it is now would allow the house account to just not complete a large losing bet.
The dicefund creator can actually finalize any dicebet transactions and it should properly deal with paying winners and not paying losing tx.
In the event a house account tries to cheat by not doing a dicefinish for winning bets, i.e. trying to not pay out: when the dice plan's expiration happens, it is treated as a win and anybody can complete the dicefinish.
When the dicefinish completes a bet for either win or loss, it attaches the original entropy value so the hash of it can be verified. In other words, it is provably fair and random for each and every bet. The dicebet is the one that chooses what house entropy to use, so that alone gives the power to determine the outcome to the dicebettor. And as long as the bettor's entropy is a high entropy value, the outcome is totally random.
Technically I generate two 256-bit numbers from the two entropy values. I just compute SHA256(house entropy + bettor entropy) for the house and SHA256(bettor entropy + house entropy) for the bettor. Then, for odds of > 1, the bettor's entropy value is divided by the odds and the two numbers compared. The bettor value adjusted by odds needs to be bigger than the house value.
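A sketch of that outcome computation in Python (the helper names are mine; this illustrates the scheme as described, not the actual consensus code):

import hashlib

def sha256_int(data: bytes) -> int:
    """Interpret a SHA256 digest as a 256-bit integer."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def dicebet_wins(house_entropy: bytes, bettor_entropy: bytes, odds: int) -> bool:
    house_num  = sha256_int(house_entropy + bettor_entropy)   # house's number
    bettor_num = sha256_int(bettor_entropy + house_entropy)   # bettor's number
    # For odds > 1 the bettor's number is scaled down by the odds; the bettor
    # wins when the adjusted value is still bigger than the house value.
    return bettor_num // odds > house_num

At 1:1 odds this reduces to a straight comparison of the two hashes, so each side wins about half the time.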
I think the payout matches the risk. For 1:1 odds, the two values are directly compared and a win pays 2x what was bet.
A binary relation, R, over C is a set of ordered pairs made up from the elements of C. The relations we are interested in here are binary relations on a set.
Reflexive: a relation R on a set A is said to be reflexive if every element of A is related to itself. For example, on the set A = {1, 2, 3}, the relation R = {(1,1), (2,2), (3,3)} is reflexive.
Anti-reflexive (irreflexive): if no element of the set relates to itself, then the relation is irreflexive, or anti-reflexive. The relation $<$ on the reals is irreflexive; it is also antisymmetric and transitive. Note that an antisymmetric relation need not be reflexive: consider the empty relation on a non-empty set, for instance, or again the relation $<$ on the reals.
Co-reflexive: a relation ~ is co-reflexive if a ~ b implies a = b.
Quasi-reflexive: if each element that is related to some element is also related to itself; stated formally for a relation ~ on a set A: ∀ a, b ∈ A: a ~ b ⇒ (a ~ a ∧ b ~ b).
A relation R is a partial order relation if and only if R is reflexive, antisymmetric, and transitive. The set A together with a partial ordering R is called a partially ordered set, or poset. Example: the relation ⊆ of set inclusion is a partial ordering on any collection of sets, since it is reflexive, antisymmetric, and transitive.
A relation R on a finite set can be represented by a square matrix. With A = {1, 2, 3, 4}, the matrix has entry a_{ij} = 1 exactly when (i, j) ∈ R, and R is reflexive iff all the diagonal elements (a11, a22, a33, a44) are 1.
Exercises: (1) Let R = {(1,1), (1,2), (2,1)} be a relation on {1, 2}; decide whether R is A) reflexive, B) transitive, C) symmetric, or D) antisymmetric. (2) Let * be a binary operation on ℝ defined by a * b = (a + b)/2; determine whether * is associative and commutative.
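A small Python sketch (my own illustration) that checks these properties for the relation in exercise (1):

def is_reflexive(R, A):
    return all((a, a) in R for a in A)

def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

def is_antisymmetric(R):
    return all(a == b for (a, b) in R if (b, a) in R)

def is_transitive(R):
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

A = {1, 2}
R = {(1, 1), (1, 2), (2, 1)}
print(is_reflexive(R, A), is_symmetric(R), is_antisymmetric(R), is_transitive(R))
# False True False False

So the relation in exercise (1) is symmetric (answer C) and none of the other three.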
# wx.lib.agw.rulerctrl
RulerCtrl implements a ruler window that can be placed on top, bottom, left or right to any wxPython widget.
## Description
RulerCtrl implements a ruler window that can be placed on top, bottom, left or right to any wxPython widget. It is somewhat similar to the rulers you can find in text editors software, though not so powerful.
RulerCtrl has the following characteristics:
• Can be horizontal or vertical;
• 4 built-in formats: integer, real, time and linearDB formats;
• Units (as cm, dB, inches) can be displayed together with the label values;
• Possibility to add a number of “paragraph indicators”, small arrows that point at the current indicator position;
• Customizable background colour, tick colour, label colour;
• Possibility to flip the ruler (i.e. changing the tick alignment);
• Changing individually the indicator colour (requires PIL at the moment);
• Different window borders are supported (wx.STATIC_BORDER, wx.SUNKEN_BORDER, wx.DOUBLE_BORDER, wx.NO_BORDER, wx.RAISED_BORDER, wx.SIMPLE_BORDER);
• Logarithmic scale available;
• Possibility to draw a thin line over a selected window when moving an indicator, which emulates the text editors software.
And a lot more. See the demo for a review of the functionalities.
## Usage¶
Usage example:
import wx
import wx.lib.agw.rulerctrl as RC

class MyFrame(wx.Frame):

    def __init__(self, parent):

        wx.Frame.__init__(self, parent, -1, "RulerCtrl Demo")

        panel = wx.Panel(self)

        text = wx.TextCtrl(panel, -1, "Hello World! wxPython rules", style=wx.TE_MULTILINE)

        ruler1 = RC.RulerCtrl(panel, -1, orient=wx.HORIZONTAL, style=wx.SUNKEN_BORDER)
        ruler2 = RC.RulerCtrl(panel, -1, orient=wx.VERTICAL, style=wx.SUNKEN_BORDER)

        mainsizer = wx.BoxSizer(wx.HORIZONTAL)
        leftsizer = wx.BoxSizer(wx.VERTICAL)
        bottomleftsizer = wx.BoxSizer(wx.HORIZONTAL)
        topsizer = wx.BoxSizer(wx.HORIZONTAL)

        # Lay the horizontal ruler out above the text control and the
        # vertical ruler beside it (one possible arrangement).
        topsizer.Add(ruler1, 1, wx.EXPAND, 0)
        leftsizer.Add(topsizer, 0, wx.EXPAND, 0)
        bottomleftsizer.Add(ruler2, 0, wx.EXPAND, 0)
        bottomleftsizer.Add(text, 1, wx.EXPAND, 0)
        leftsizer.Add(bottomleftsizer, 1, wx.EXPAND, 0)
        mainsizer.Add(leftsizer, 1, wx.EXPAND, 0)

        panel.SetSizer(mainsizer)

# our normal wxApp-derived class, as usual
app = wx.App(0)
frame = MyFrame(None)
app.SetTopWindow(frame)
frame.Show()
app.MainLoop()
## Events¶
RulerCtrl implements the following events related to indicators:
• EVT_INDICATOR_CHANGING: the user is about to change the position of one indicator;
• EVT_INDICATOR_CHANGED: the user has changed the position of one indicator.
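A minimal sketch of handling these events follows; it assumes (as in the demo shipped with the widget) that the event binders are importable from the rulerctrl module and that the event object exposes the indicator value through GetValue():

import wx
import wx.lib.agw.rulerctrl as RC

class EventFrame(wx.Frame):

    def __init__(self, parent):
        wx.Frame.__init__(self, parent, -1, "Indicator events")
        self.ruler = RC.RulerCtrl(self, -1, orient=wx.HORIZONTAL)
        # Bind both indicator events emitted by the ruler.
        self.ruler.Bind(RC.EVT_INDICATOR_CHANGING, self.OnChanging)
        self.ruler.Bind(RC.EVT_INDICATOR_CHANGED, self.OnChanged)

    def OnChanging(self, event):
        # Call Skip() to let the drag proceed; omit it to refuse the move.
        event.Skip()

    def OnChanged(self, event):
        # GetValue() assumed per the RulerCtrlEvent summary below.
        print("Indicator moved to", event.GetValue())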
## Supported Platforms¶
RulerCtrl has been tested on the following platforms:
• Windows (Windows XP);
• Linux Ubuntu (Dapper 6.06)
## Window Styles¶
No particular window styles are available for this class.
## Events Processing¶
This class processes the following events:
• EVT_INDICATOR_CHANGED: the user has changed the indicator value.
• EVT_INDICATOR_CHANGING: the user is about to change the indicator value.
RulerCtrl is distributed under the wxPython license.
Latest Revision: Andrea Gavana @ 19 Dec 2012, 21.00 GMT
Version 0.4
## Functions Summary¶
• ConvertPILToWX: Converts a PIL image into a wx.Image.
• ConvertWXToPIL: Converts a wx.Image into a PIL image.
• GetIndicatorBitmap: Returns the image indicator as a wx.Bitmap.
• GetIndicatorData: Returns the image indicator as a decompressed stream of characters.
• GetIndicatorImage: Returns the image indicator as a wx.Image.
• MakePalette: Creates a palette to be applied on an image based on input colour.
## Classes Summary¶
• Indicator: This class holds all the information about a single indicator inside RulerCtrl.
• Label: Auxiliary class; just holds information about a label in RulerCtrl.
• RulerCtrl: RulerCtrl implements a ruler window that can be placed on top, bottom, left or right to any wxPython widget.
• RulerCtrlEvent: Represents details of the events that the RulerCtrl object sends.
### Functions¶
ConvertPILToWX(pil, alpha=True)
Converts a PIL image into a wx.Image.
Parameters
• pil – a PIL image;
• alpha – True if the image contains alpha transparency, False otherwise.
Note: this function requires PIL (Python Imaging Library).
ConvertWXToPIL(bmp)
Converts a wx.Image into a PIL image.
Parameters
bmp – an instance of wx.Image.
Note: this function requires PIL (Python Imaging Library).
GetIndicatorBitmap()
Returns the image indicator as a wx.Bitmap.
GetIndicatorData()
Returns the image indicator as a decompressed stream of characters.
GetIndicatorImage()
Returns the image indicator as a wx.Image.
MakePalette(tr, tg, tb)
Creates a palette to be applied on an image based on input colour.
Parameters
• tr – the red intensity of the input colour;
• tg – the green intensity of the input colour;
• tb – the blue intensity of the input colour. |
# Find equations of the tangent lines at points
• Sep 26th 2009, 07:56 PM
s3a
Find equations of the tangent lines at points
Given y = 3 + 4x^2 - 2x^3, how do I find the slope of the tangent line at the point (1,5), for example, using the Newton quotient or something else from the Derivatives chapter I'm studying? I "cheated" and estimated the slope by plugging in x = 1.01 and then finding the slope between the points (1,5) and (1.01, 5.019798), but I'd like to be able to solve this the proper (and required) way. My work is attached.
Any help would be greatly appreciated!
• Sep 26th 2009, 07:59 PM
differentiate
$y = 3 + 4x^2 - 2x^3$
$y' = 8x - 6x^2$
when $x=1$: $m = 8(1) - 6(1)^2 = 2$
• Sep 26th 2009, 08:10 PM
s3a
How did you go from 3 + 4x^2 - 2x^3 to 8x - 6x^2 ?
• Sep 26th 2009, 08:41 PM
differentiate
You differentiate it.
• Sep 26th 2009, 09:28 PM
s3a
Could you please be a little more precise because I am very new to this?
• Sep 26th 2009, 10:07 PM
differentiate
Differentiation. The rule is:
If $y = x^n$ then $y' = nx^{n-1}$; so if $y = x^2$ then the differentiated form will be $2x$.
The derivative of any constant (e.g. 9, 8, 7, 100) is zero.
The derivative of an equation of the form $y = ax^n$ is $y' = anx^{n-1}$.
so for this equation 3 + 4x^2 - 2x^3
the derivative of 3 is zero
the derivative of $4x^2$ is $8x$
the derivative of $-2x^3$ is $-6x^2$
Put it all together and the derivative is:
$8x - 6x^2$
• Sep 27th 2009, 07:09 AM
s3a
Ok thanks, I get how to do the differentiation thing now, but why am I doing that? Like, what am I finding? (in not so mathematical terms please)
• Sep 27th 2009, 10:50 PM
differentiate
You're finding the gradient function. This is the function that allows you to find the gradient: once you have the gradient function, substituting any value of x into it gives you the gradient of the curve at that point.
e.g. $y = x^2$
dy/dx = 2x
substitute x =1 into dy/dx.
this is equal to 2.
therefore, the gradient when x =1, is m =2
so basically, when you differentiate something, you are given another equation known as the GRADIENT FUNCTION. This is an equation specifically targeted at finding gradients. |
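Applying this to the original question: the gradient function of $y = 3 + 4x^2 - 2x^3$ is $y' = 8x - 6x^2$, so the slope at $x = 1$ is $m = 2$, and the tangent line through $(1, 5)$ is

$y - 5 = 2(x - 1) \quad\Longrightarrow\quad y = 2x + 3$

which matches the numerical estimate of about 1.98 from the secant through $(1.01, 5.019798)$.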
# The Particle at the End of the Universe
### Sean Carroll, 2012, Hillsboro 539.721
Mostly about the development of the Standard Model, mathless. There is a large "bandgap" between word-salad physics descriptions and the fully mathematical graduate-level description, and very little in between. This is among the best word-salad books, but I hope to find one which at least includes some algebra, and better technical illustrations of the geometry. Many more drawings explaining symmetries would help.
Page 110 LHC: bunches collide at 20 MHz. Hundreds of millions of collisions per second, up to 100 or more particles per collision, one megabyte per collision, "1000 1-terabyte hard drives per second".
• This compares to the data rate of scanning a stream of launch loop rotor bolts to 10 micrometer precision. This data would be compared to the prior history of the bolts, repeating every 8 minutes. Identifying "interesting" "evolving" defects that increase the probability of future failure for a bolt, then replacing that bolt in the stream, is a computation and pattern recognition problem on a similar scale to LHC particle detection. Launch loop will exploit many technological advances created by the genius researchers at LHC.
Page 119 "science consultant for big-budget Hollywood movie", planet shaped like a disk. COuldn't find reference with websearch.
Page 208 Particle Fever, David Kaplan, Walter Murch, Sundance 2013.
Page 219 More and Different: Notes from a Thoughtful Curmudgeon by Phil Anderson.
• book flap: "at press time, he was involved in several scientific controversies about high profile subjects, about which his point of view, though unpopular at the moment, is likely to prevail eventually"
# The Big Picture
## 2016, Beaverton, 576.83 CAR
This is philosophy, not science as such. Trying to turn science into "emotional meaning" is difficult; "is" is not "ought". I checked it out hoping to learn about why cosmologists choose the models they do, and why those models are absurdly oversimplified; perhaps the message is that they choose oversimplified models so they can write emotional philosophies like this book.
My own view is that nature in the miniature is rule-driven, but those rules are extremely complex and baroque, with surprising non-obvious outcomes. Nature at maximum scope is far more complex, but difficult and expensive to observe, so there is still a wilderness for the philosophers to hypothesize the existence of islands of philosophical orderliness. Spherical cows have been banished from the Earth and from our particle colliders, but they still orbit at the extremes of space and time.
Or, I'm rectocranially inserted, and I should devote more hours to reading books like this. That would leave less time to read books with math in them.
The appendix has some math, the equation for the standard model "action", page 437:
$$W = \int_{k<\Lambda} [Dg][DA][D\psi][D\Phi] \, \exp\Bigg\{ i \int d^4x \, \sqrt{-g} \, \Bigg[ \frac{m_p^2}{2} R - \frac{1}{4} F_{\mu\nu}^a F^{a\mu\nu} + i\, \overline{\psi^i} \gamma^\mu D_\mu \psi^i + \Big( \overline{\psi_L^{\,i}} \, V_{ij} \, \Phi \, \psi_R^{\,j} + h.c. \Big) - |D_\mu \Phi|^2 - V(\Phi) \Bigg] \Bigg\}$$
• $\int_{k<\Lambda} [Dg][DA][D\psi][D\Phi] \, \exp\{ i \cdots \}$: quantum mechanics
• $k<\Lambda$: ultraviolet cutoff, energy limit for valid calculation
• $Dg$: gravitons
• $DA$: bosonic force fields
• $D\psi$: fermions
• $D\Phi$: Higgs
• $\int d^4x \sqrt{-g}$: integral over curved spacetime
• $\sqrt{-g}$: curvature of spacetime
• $\frac{m_p^2}{2} R$: gravity; $m_p$ is the Planck mass, $R$ the curvature scalar
• h.c.: Hermitian conjugate; use only the real part of complex numbers in these terms
• $F$: field strength tensor; $F_{\mu\nu}^a F^{a\mu\nu}$: other forces like electromagnetism
• $i\, \overline{\psi^i} \gamma^\mu D_\mu \psi^i + \big( \overline{\psi_L^{\,i}} \, V_{ij} \, \Phi \, \psi_R^{\,j} + h.c. \big)$: matter
• $V_{ij}$: mixing matrix, fermion decay
• subscripts $L$ and $R$: left-handed and right-handed fields work differently, parity violation
• $|D_\mu \Phi|^2 - V(\Phi)$: Higgs kinetic and potential terms, always nonzero
SeanCarroll (last edited 2019-03-24 01:17:27 by KeithLofstrom) |
## Trigonometry (11th Edition) Clone
$$\frac{\tan80^\circ+\tan55^\circ}{1-\tan80^\circ\tan55^\circ}=-1$$
$$X=\frac{\tan80^\circ+\tan55^\circ}{1-\tan80^\circ\tan55^\circ}$$ From the tangent sum identity: $$\frac{\tan A+\tan B}{1-\tan A\tan B}=\tan(A+B)$$ So here $X$ follows the above identity with $A=80^\circ$ and $B=55^\circ$. Therefore, $X$ can be rewritten as $$X=\tan(80^\circ+55^\circ)$$ $$X=\tan135^\circ$$ As we know from Section 5.3: $$\tan\theta=\cot(90^\circ-\theta)$$ So, $$X=\tan135^\circ=\cot(90^\circ-135^\circ)=\cot(-45^\circ)$$ Also, from the Negative-Angle Identities: $$\cot(-\theta)=-\cot\theta$$ Therefore, $$X=-\cot45^\circ$$ $$X=-1$$ Overall, $$\frac{\tan80^\circ+\tan55^\circ}{1-\tan80^\circ\tan55^\circ}=-1$$
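The same value can be reached a little more directly with the supplementary-angle identity $\tan(180^\circ-\theta)=-\tan\theta$: $$X=\tan135^\circ=\tan(180^\circ-45^\circ)=-\tan45^\circ=-1$$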
# General¶
## Overview¶
With enhavo you are able to set global values for your storage and strategy. Additionally you can define these values for every newsletter subscription form individually. You are able to set one or more groups for every subscriber globally and add additional groups per subscription form.
## Default Storage Type¶
The default storage type is applied to every subscription form on your site if you don’t override it. There are currently two storage types - ‘local’ and ‘cleverreach’. ‘local’ is the default value - you need no entry in your app/config/enhavo.yml. If you want to use Clever Reach, put the following statement in your app/config/enhavo.yml and follow the instructions in the Clever Reach Configuration help file.
enhavo_newsletter:
storage:
default: cleverreach
## Default Groups¶
You can associate subscribers with groups. This is mandatory for Clever Reach and optional for the local storage. To set default groups, use the following example:
enhavo_newsletter:
storage:
groups:
defaults:
- group1
- group2
- ..
## Default Strategy¶
There are currently 3 different subscription strategies: notify, accept and double_opt_in. The default strategy is notify - you don’t need to add the following statement if you want to use it. To set another default strategy use this statement
enhavo_newsletter:
strategy:
default: double_opt_in
## Individual Form Settings¶
You are able to override the default settings for storage, strategy and groups for every individual form. Also you can define the type and template individually. Do it as follows
enhavo_newsletter:
forms:
<form_name>:
default_groups:
- 'code_of_group3'
storage:
type: local
strategy:
type: accept
# Rod partially hanging off edge of table
A uniform rigid rod of length $L$ lies at the edge of a frictionless table so length $x$ of the rod rests on the table and the rest is beyond its edge.
Intuition suggests that the rod will stay like this unless $x$ is smaller than $L/2$, which is when the centre of mass hangs off the table. However, if this were true, then I am left with a dilemma. While $x > L/2$, to ensure each infinitesimal section of mass $\mu\,\delta x$ of the rod is in equilibrium ($\mu$ is the mass density of the rod), there must be a reaction force $= g\mu\,\delta x$ on it. But then the total force on the rod would be $-\mu g(L-x)$, so its centre of mass must fall.
What does this mean? Is it impossible for a uniform rod to partially hang off the edge of the table while being in equilibrium? Or has something gone wrong with my analysis? |
## R exam
Following a long tradition (!) of changing the modus vivendi of each exam in our exploratory statistics with R class, we decided this year to give the students a large collection of exercises prior to the exam and to pick five of them for the exam, the students having to solve two and only two of those. (The exercises are available in French on my webpage.) This worked beyond our expectations, in that the overwhelming majority of students went over all the exercises and did really (too) well at the exam! Next year, we will hopefully increase the collection of exercises and also prohibit written notes during the exam (to avoid a possible division of labour among the students).
Incidentally, we found a few (true) gems in the solutions, incl. a harmonic mean resolution of the approximation of the integral
$\int_2^\infty x^4 e^{-x}\,\text{d}x=\Gamma(5,2)$
since some students generated from the distribution with density f proportional to the integrand over [2,∞) [a truncated gamma] and then took the estimator
$\dfrac{e^{-2}}{\frac{1}{n}\,\sum_{i=1}^n y_i^{-4}}\approx\dfrac{\int_2^\infty e^{-x}\,\text{d}x}{\mathbb{E}[X^{-4}]}\quad\text{when}\quad X\sim f$
although we expected them to simulate directly from the exponential and average the sample to the fourth power… In this specific situation, the (dreaded) harmonic mean estimator has a finite variance! To wit:
> y=rgamma(shape=5,n=10^5)
> pgamma(2,5,low=FALSE)*gamma(5)
[1] 22.73633
> integrate(f=function(x){x^4*exp(-x)},2,Inf)
22.73633 with absolute error < 0.0017
> pgamma(2,1,low=FALSE)/mean(y[y>2]^{-4})
[1] 22.92461
> z=rgamma(shape=1,n=10^5)
> mean((z>2)*z^4)
[1] 23.92876
So the harmonic mean does better than the regular Monte Carlo estimate in this case!
# How to produce a biholomorphism
If one deals with a simply-connected domain in the complex plane which is not the whole plane then it is easy to construct the biholomorphism mapping it to the unit disc. This can be done by means of the Bergman kernel and the construction is as "explicit" as is the kernel. My question is about dimension higher than one. Given two domains for which one knows in advance that they are biholomorphic, are there any methods (I don't know maybe sheaf theoretic or using $\bar\partial$ theory) or procedures to obtain the biholomorphism between them? Most of the literature deals with the problem of distinguishing when two domains are not biholomorphic so it is not helpful.
I talked to my advisor about this and he doesn't think there is much. The fact that having the Bergman kernel allows you to construct the biholomorphism in 1 complex variable is really kind of a fluke: the "change of coordinates" formula for the Bergman kernel in higher dimensions involves the determinant of the complex Jacobian. It is pretty rare that you would somehow know that two domains were biholomorphic without having an explicit map in higher dimensions: we don't have anything like a Riemann mapping theorem. – Steven Gubkin Sep 18 '12 at 0:10
# EXTENDED SPECTROSCOPY OF $C_{2}H_{3}^{+}$ USING A HOLLOW CATHODE DISCHARGE
Title: EXTENDED SPECTROSCOPY OF $C_{2}H_{3}^{+}$ USING A HOLLOW CATHODE DISCHARGE
Creators: Gabrys, C. M.; Uy, Dairene; Jagod, M.-F.; Oka, T.
Issue Date: 1993
Publisher: Ohio State University
Abstract: We have constructed a 3 m hollow cathode tube to conduct infrared spectroscopy of protonated carbocations. This system has been applied to $C_{2}H_{2}/H_{2}$ discharges. As shown by Amano, we obtained an ion spectrum of protonated acetylene almost exclusively, in contrast to our earlier scans of positive column discharges, which contained spectra of many other carbocations such as $CH_{3}^{+}$, $CH_{2}^{+}$, and $C_{2}H_{2}^{+}$ as well as of $C_{2}H_{3}^{+}$. Our discharge assumes a length of 1.6 m when the cathode is filled with a flowing mixture of $C_{2}H_{2}/H_{2} = 3/112$ mTorr and powered with 0.5 A RMS at 11 kHz. Single-mode IR radiation from our difference frequency spectrometer was reflected 20 times through the discharge off a vacuum-enclosed White cell, detected with noise subtraction, and demodulated at the discharge frequency to selectively display the concentration-modulated ion absorption lines. We obtained a signal to noise of 70 for the strongest spectral lines. The purity and strength of the observed $C_{2}H_{3}^{+}$ $\nu_{6}$ (= 3142.165 cm$^{-1}$) band, together with the recent millimeter wave results by the Lille group$^{2}$, have allowed us to considerably extend and strengthen our previous assignment$^{1}$; transitions up to $J = 25$ and $K_{a} = 4$ have been assigned using ground state combination differences. The observed splitting in the excited state due to proton tunneling will be presented. From the intensity pattern we estimate a rotational temperature of 250 K for $C_{2}H_{3}^{+}$ in this water-cooled plasma.
Description: $^{1}$ M. W. Crofton, M.-F. Jagod, B. D. Rehfuss, and T. Oka, J. Chem. Phys. 91, 5139 (1989). $^{2}$ M. Bogey, M. Cordonnier, C. Demuynck, and J. L. Destombes, Astrophys. J. 399, L103-L105 (1992).
Author Institution: Department of Chemistry, University of Chicago, Chicago, IL
URI: http://hdl.handle.net/1811/18532
Other Identifiers: 1993-RD-4
# Ols And Machines Mining
## Ordinary Least Squares — Data Science Notes
2021-2-2 The OLS estimator can be shown to be unique by convexity, since a strictly convex function has a unique global minimum. The second-order convexity conditions state that a function is convex if it is continuous, twice differentiable, and has an associated Hessian matrix that is positive semi-definite.
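For reference, this convexity argument leads to the standard closed form for the unique minimizer (assuming the design matrix $X$ has full column rank, so that the Hessian $2X^\top X$ is positive definite rather than merely semi-definite):

$$\hat\beta = \arg\min_{\beta} \lVert y - X\beta \rVert^2 = (X^\top X)^{-1} X^\top y$$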
## ECONOMIC ANALYSIS TO OLS FOR MINERAL PRO JECTS
2005-12-27 building autonomous mining machines (difficult, but with a clear pay-off). Exploration Delineation. In the mining industry, this means finding out with reasonable certainty what is there to be mined, and then building a mathematical model of precisely where it is and how it will be attacked.
## The Key For Your Needs – Asics Miners SA
We are experienced miners and specialist mining machine retailers. We advise you according to your funds. We redefine cryptocurrency mining in Africa. We will be glad to share our experience with you. For more information, feel free to contact us on Whatsapp,
## THE IMPACT OF SOLID MINERALS RESOURCES ON
2019-2-27 Solid minerals, Economic Growth, Exports, Exchange Rate, OLS 1. BACKGROUND TO THE STUDY Mining is one of the oldest economic activities in Nigeria, dating back to 340BC. Early mining activity involved the extraction of gold and metallic substances. Most states have identified extensive mineral resources. However, most of this is unquantified.
## A Combination Method for Averaging OLS and GLS
Therein, $P_m = X_m (X_m' X_m)^{-1} X_m'$ is the projection matrix of the $m$th regression model for OLS with $m = 1, \dots, M_1$, and $G_m \equiv X_m (X_m' \Omega^{-1} X_m)^{-1} X_m' \Omega^{-1}$ with $X_m$ being the independent variable matrix of the $m$th regression model for GLS with $m = 1, \dots, M_2$. In this paper, we only consider the situation with nested models for both OLS and GLS estimators.
## Chapter 6 Regularized Regression Hands-On
2020-2-1 Many real-life data sets, like those common to text mining and genomic studies, are wide, meaning they contain a large number of features ($p > n$). As $p$ increases, we're more likely to violate some of the OLS assumptions, and alternative approaches should be considered. This was briefly illustrated in Chapter 4, where the presence of multicollinearity was diminishing the interpretability of ...
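The chapter itself works in R; purely as an illustration of the same idea (scikit-learn, with made-up toy data), ridge regression keeps the OLS loss but adds an L2 penalty that keeps the fit well behaved even when $p > n$:

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, p = 50, 200                       # wide data: more features than observations
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 2.0                       # only a few features actually matter
y = X @ beta + rng.normal(size=n)

# Plain OLS is ill-posed here (X'X is singular); the penalty alpha*||b||^2 fixes that.
model = Ridge(alpha=1.0).fit(X, y)
print(model.coef_[:5])               # estimates of the informative coefficients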
## r - Using OLS estimators in Binary models - Cross
2021-6-6 Actually, the quadratic loss function $\mathcal L (y,\hat y)=(y-\hat y)^2$ and OLS can be applied to binary outputs, and some people do it. However, when the dependent variable (DV) is binary, usually the cross-entropy loss $y \ln \hat y$ is used.
## r - OLS estimators for non-linear models - Cross
2021-6-6 For a linear model the OLS estimator corresponds to the maximum-likelihood estimator (MLE), which has various good estimation properties. This is not true for non-linear models. In the latter case we can fit the model using the MLE or we can use iteratively reweighted least squares. Minimizing square loss can be fine when the model ...
## Ordinary Least Squares (OLS) using statsmodels -
2020-7-17 In the OLS method, we have to choose the values of $b_0$ and $b_1$ such that the total sum of squares of the differences between the calculated and observed values of $y$ is minimised. In the formula for OLS, $\hat y_i$ is the predicted value for the $i$th observation and $y_i$ is the actual value for the $i$th observation.
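A minimal statsmodels sketch of that fit (the data here are simulated and the variable names are illustrative):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(0, 1, 50)   # true b0 = 2, b1 = 0.5

X = sm.add_constant(x)        # prepend the intercept column
results = sm.OLS(y, X).fit()  # minimizes the residual sum of squares
print(results.params)         # estimated [b0, b1]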
## FM (Factorization Machines): Theory and Practice - Zhihu
2020-8-26 LLSean/data-mining. The data used in this article is movielens-100k, comprising u.item, u.user, ua.base and ua.test ... Reading notes on the FM paper "Factorization Machines"; FFM theory and practice in depth. Edited 2020-08-26. Machine learning; data mining; recommender systems ...
## DB-OLS: An Approach for IDS
2010-10-30 We propose a model, "DB-OLS: An Approach for IDS", which is a deviation-based outlier approach for intrusion detection using self-organizing maps. In this model the "self-organizing map" approach is used for behaviour learning and the "outlier mining" approach for detecting an intruder by calculating deviation from a known user profile.
## Offline Separator - OLS 10 HYDAC
The OffLine Separator OLS is a dewatering unit for hydraulic oils, light gear oils and diesel fuels with densities of less than 950 kg/m3. The dewatering process works according to the coalescence principle, which means that there is a combination ...
## Data Mining: Practical Machine Learning Tools and
Data Mining: Practical Machine Learning Tools and Techniques. Machine learning provides an exciting set of technologies that includes practical tools for analyzing data and making predictions but also powers the latest advances in artificial intelligence.
## Lecture 5 MACHINE LEARNING - ssc.wisc.edu
2019-7-20 Learning Machines: Daleks? Bruce Hansen (University of Wisconsin), Machine Learning, July 22-26, 2019. ... Data Mining, Inference, and Prediction. Today's lecture is extracted from this textbook: James, Witten, Hastie, and Tibshirani (2013), An Introduction to ... If the OLS estimator is "large", the penalty pushes it towards zero ...
## CryptoMining: Energy Use and Local Impact
2019-6-28 Setting 1: Total consumption of electricity is large.
• Digiconomist: current use 0.3% of world energy; could power 6.3M US households.
• De Vries (2018) in Joule: short-term projection 0.5% of world energy; implication: 10.5M US households.
• Bitmain IPO, Cambridge (2018): manufacturer market share 67%; recent sales 4.2 million machines; energy use of these machines >
## least squares - Which OLS assumptions are colliders ...
2021-5-21 $\begingroup$ (+1) nice one @DemetriPananos. Similar things happen when looking at bias due to confounding, mediation, differential selection etc. The OLS estimates can of course be unbiased for that particular model, but the problem is that the model is mis-specified if we wish to estimate the total causal effect of some exposure on an outcome. Of course if we want just direct effects (e.g. in ...
## The 20 Best AI and Machine Learning Software and
1 day ago. Weka is machine learning software in Java with a wide range of machine learning algorithms for data mining tasks. It consists of several tools for data preparation, classification, regression, clustering, association rules mining, and visualization. ...
## Automation risk in the EU labour market A skill-needs
2018-11-22 mining approach employed in the paper. The views expressed in the paper are solely the ... of EU employees being in jobs with high risk of substitutability by machines, robots or other algorithmic processes, and uncovers its impact on labour market outcomes. Using relevant ... Table 4: Labour market impact of automation risk, OLS estimates ...
## Statistics 36-462/662: Data Mining (Spring 2020)
2020-4-28 Statistics 36-462/662: Data Mining Spring 2020 Prof. Cosma Shalizi Tuesdays and Thursdays 1:30--2:50 Porter Hall 100 Data mining is the art of extracting useful patterns from large bodies of data. (Metaphorically: finding seams of actionable knowledge in the raw ore of information.)
## (PDF) Improving Fractional Impervious Surface
Impervious surface area (ISA) is an important parameter for many studies such as urban climate, urban environmental change, and air pollution; however, mapping ISA at the regional or global scale is still challenging due to the complexity of
## The Elements of Statistical Learning: an authoritative data mining textbook
2009-4-13 Data mining is a field developed by computer scientists, but many of its crucial elements are embedded in important and subtle statistical concepts. Statisticians can play an important role in the development of this field, but as was the case with artificial intelligence, expert systems and neural networks, the statistical research community has been slow to respond.
# Materials with good thermal insulating properties
I'm doing a research for a university project.
In particular I'm looking for a "commercial" material (so a material that is available on the market or can be home made) that has good thermal insulating properties.
In particular the material has to last for about 300 s in an environment of up to 2000 °C, and the inner part of it has to stay as cold as possible.
I have to choose parts for something like a satellite that should rotate around the sun or a star.
I did a lot of research and in particular I focus on the NASA projects. I saw a lot of video of them about "insulation tiles" and some other thermal coating but I think a common person cannot buy them or make at home.
I hope that this is the right community where ask the question.
Edit
Are there no other constraints? Yes, the material should be as light as possible.
Because then you could just use a very thick graphite (or copper, for that matter) layer, thick enough that after 300 s the inner part is still close to the original temperature. Most likely you want something a tad lighter. I think the problem is related to the temperature of 2000 °C on the outer layer; a lot of materials have a melting point lower than 2000 °C. I was also thinking of something like a layer of a very high melting point material covered by a protective coating; again, the problem is the 2000 °C.
How would you launch your satellite, if you are restricted to things a person can buy or make at home? This is the question that I try to answer with my research: is it possible to build "something like a satellite" with commercial products? My focus is only on the thermal aspects.
Are you thinking about the thermal shield of the satellite? Yes, basically a thermal shield; the material should protect the inner part of the satellite from the heat radiation coming from the sun and during takeoff. The first idea is something that completely envelops the satellite, not considering "the engine part". I performed some calculations and found that the most critical conditions are a temperature of 2000 °C for a time of 300 s.
Is it going to be in some atmosphere or in vacuum during these 300 s? At the moment I'm considering both scenarios, but I think the most challenging is the one in which there is an atmosphere.
• Are there no other constraints? Because then you could just use a very thick graphite (or copper, for that matter) layer, thick enough that after 300 s the inner part still was close to the original temperature. Most likely you want something a tad lighter. Going in that direction I would look at silica aerogel, which unfortunately has a melting point of 1,473 K but otherwise close to best available thermal conductivity, 0.03 W/(m·K) in air and better in vacuum. – Anders Sandberg Apr 19 '19 at 21:15
• How would you launch your satellite, if you are restricted to things a person can buy or make at home? And are thinking about the thermal shield of the satellite? Is it going to be in some atmosphere or in vacuum during these 300 s? – nasu Apr 20 '19 at 1:31
• I update my answer. – Ugo Mela Apr 20 '19 at 13:42
• Thank you! @AndersSandberg – Ugo Mela Apr 20 '19 at 14:24
• How did you get the 2000 degrees value? Before you look for an answer it helps a lot if you make sure you have the right question. Why do you even need this shield? For the launching phase? For the re-entry? What does it mean to have a satellite around the Sun? How far? And looking for a material that does not melt at the specific temperature may not be the best thing. Melting and evaporating the shield dissipates a lot of heat and it may keep the rest cool. It is a one time only shield or multiple uses? – nasu Apr 21 '19 at 15:37
You are interested in three different classes of properties. One is the ability to withstand high temperatures and low pressures. I would put this within a class called the environmental properties of the material. The second is the ability to maintain a low heat flux under a high temperature gradient. This is a true thermal property. The third is a physical characteristic or intrinsic property of the material ... its density. This is not truly a mechanical property because we just measure this value, we do not "do something" to the material to determine a property.
You could start with the environmental properties and then down-select to thermal and density. The better approach, however, is to start with the thermal insulation properties of materials and refine further based on the environmental. It is better because in this way you make thermal insulation your go/no-go criterion and you make environmental integrity simply a max/min criterion. When the material does not meet your thermal performance goal, it fails. When it does not meet the environmental goals, it could still be viable if it is only used within a restricted (shorter) period of time.
How does density appear? It appears in one of two ways. For the case at hand, one way is to state that your material must have a density that is as low as possible. Then, density is a minimum selection criteria. Alternatively and perhaps better, you can fold density in to the thermal property. In this case, rather than searching for materials with the lowest thermal conductivity, you search for materials with an appropriate scaling of thermal conductivity and density. What is appropriate? The truth is that you want the lowest thermal conductivity per unit mass. In this case, take their ratio.
Start with research to find materials that have low thermal conductivity $k$. Add a column for density $\rho$ and the ratio $k/\rho$. Add columns to the list that define metrics for the thermal and vacuum integrity. The former could be the melting or decomposition temperature of the material. The latter could be the vapor pressure of the material (at room temperature).
To assess whether the material meets your thermal metric, find materials with the lowest specific thermal conductivity $k/\rho$. Then select viable materials that have acceptable metrics for thermal and pressure stability.
Once you have a selection of candidate materials, you can proceed to search for commercial products that contain that material. Alternatively, you can search for commercial sources of the material in a form that allows you to make your own thermal insulation.
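A small sketch of that screening step in Python; the candidate names and property values below are illustrative placeholders, not vetted data:

# Hypothetical screening table: (name, k in W/(m*K), density in kg/m^3).
candidates = [
    ("silica aerogel", 0.03, 100.0),
    ("alumina ceramic", 30.0, 3950.0),
    ("graphite", 100.0, 2200.0),
]

# Rank by specific thermal conductivity k/rho: lower is better.
for name, k, rho in sorted(candidates, key=lambda c: c[1] / c[2]):
    print(f"{name}: k/rho = {k / rho:.2e}")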
As a reference for this optimization approach, I suggest the book by Ashby, Materials Selection in Mechanical Design.
Just to add to the accepted answer. A material that would work would be a ceramic. No synthesizing aerogels needed.
You can also buy single crystal sapphire for pretty cheap and the thermal conductivity goes down upon heating. I used these kinds of materials a lot for research under ultra high vacuum (10^-10 torr) with a sample reaching around 2000-2200 K. I had liquid nitrogen on top (77 K) and the heated sample on bottom separated by the sapphire. Be careful because it's brittle, but also transparent which is nice.
There you go, something you can buy. Also if you need to measure the temperature you will probably need a calibrated Type C thermocouple, since Type K (a very common one, Chromel Alumel) will melt! |
# (a) Find RL in the network below to achieve maximum power transfer. (b) What is the...
###### Question:
(a) Find RL in the network below to achieve maximum power transfer. (b) What is the maximum power? 2 kΩ, 2 kΩ, 12 V, 32 kΩ, 32 kΩ, RL
#### Similar Solved Questions
##### Use the Principle of Mathematical Induction to prove that for every integer $n \in \mathbb{N}$, $1^2 + 2^2 + 3^2 + 4^2 + \cdots + n^2 = \frac{n(n+1)(2n+1)}{6}$
Use the Principle of Mathematical Induction to prove that for every integer $n \in \mathbb{N}$, $1^2 + 2^2 + 3^2 + 4^2 + \cdots + n^2 = \frac{n(n+1)(2n+1)}{6}$...
##### Find the area under the graph of the function over the interval given. y = e^x; [-9, 8]
Find the area under the graph of the function over the interval given. y = e^x; [-9, 8]. A) $e^{17}$ B) $e^8 - e^{-9}$ C) $e^8 + 9$ D) $e^8$...
##### 19.- Many scheduling algorithms rely on remaining execution time (RET) as a parameter to decide on...
19.- Many scheduling algorithms rely on remaining execution time (RET) as a parameter to decide on what process to schedule next. Actual RET is hard or impossible to compute efficiently. Explain how real schedulers approximate RET....
##### Constants. Part B: For the reaction X(g) + 3 Y(g) = 2 Z(g), Kp = 2.62×10⁻² at a temperature of 271 K. Calculate the value of Kc. Express your answer numerically.
Constants. Part B: For the reaction X(g) + 3 Y(g) = 2 Z(g), Kp = 2.62×10⁻² at a temperature of 271 K. Calculate the value of Kc. Express your answer numerically. View Available Hint(s). Incorrect; try again; attempts remaining. Check your value of Δn. Provide feedback...
##### Mary issues common stock in exchange for legal services received. The common stock has a fair...
Mary issues common stock in exchange for legal services received. The common stock has a fair value of $3,000 and a par value of$500. By what amount did this transaction affect Mary’s total shareholder equity? (ignore taxes) By what amount did this transaction affect Mary’s net income f...
##### Rate of disappearance and formation
Consider this reaction: 5 NO(g) + 3 MnO4-(aq) + 4 H+(aq) → 5 NO3-(aq) + 3 Mn2+(aq) + 2 H2O(l). Under certain conditions the rate of disappearance of NO is 0.134 M/s. a. What is the rate of disappearance of permanganate, MnO4-, under these conditions? b. What is the rate of formation of water under these...
##### Explain the difference between removable and nonremovable discontinuity. (Hint: See Defn 2.5 on p. 9 of the Week 3 Handout.)
Explain the difference between removable and nonremovable discontinuity. (Hint: See Defn 2.5 on p. 9 of the Week 3 Handout.)...
##### When substance X reacts with elemental oxygen according to the equation 2 X(s) + O2(g) → 2 XO2(s), the standard enthalpy for the reaction is -566 kJ. What is the standard enthalpy of formation for substance X if ΔH°f of XO2 = -393.5 kJ/mol?
When substance X reacts with elemental oxygen according to the equation: 2 X (s) + O2 (g) → 2 XO2 (s), the standard enthalpy for the reaction is -566 kJ. What is the standard enthalpy of formation for substance X if ΔH°f of XO2 = -393.5 kJ/mol....
##### Conservation of Momentum and Vector Operations: A car with mass 1,247.4 kg is traveling east through an intersection when a truck with mass 2,519.3 kg traveling north with speed 50.0 km/h through the intersection crashes into the car. The two vehicles stick to each other after the collision and they skid from the impact zone at an angle θ = 59° north of east. What is the magnitude of the final velocity of the two vehicles together after the collision, in km/h? Hint: the momentum co...
Conservation of Momentum and Vector Operations: A car with mass 1,247.4 kg is traveling east through an intersection when a truck with mass 2,519.3 kg traveling north with speed 50.0 km/h through the intersection crashes into the car. The two vehicles stick to each other after the collision...
##### Find all functions f(x) with the following properties: f'(x) = 0.3 e^{0.5x}, f(0) = 4. f(x) = ?
Find all functions f(x) with the following properties: f'(x) = 0.3 e^{0.5x}, f(0) = 4. f(x) = ?...
##### Michigan State University researchers want to investigate how rainfall affects the yield of crops in East...
Michigan State University researchers want to investigate how rainfall affects the yield of crops in East Lansing. The researchers found that the annual mean amount of rainfall is about 220 inches and the standard deviation is about 12.2 inches. The mean yield of crops in East Lansing is about 230 t...
##### What is the oxidation number of the element in the following compound? (Positive numbers do not need a sign.)
What is the oxidation number of the element in the following compound? (Positive numbers do not need a sign.)...
##### We would expect to see an increase in government spending lead to in planned aggregate expenditure...
We would expect to see an increase in government spending lead to in planned aggregate expenditure and in real planned investment increases; decreases decreases; decreases increases; increases decreases; increases Question 20 5 pts Among the most important problems of implementing fiscal policy incl...
##### In 5 days she made 80 sandcastles
In 5 days she made 80 sandcastles. Each day she made 4 fewer castles than the day before. How many castles did she make each day? Lisa went on making 4 fewer castles each day. How many castles did she make altogether?...
##### Draw step 1 of the mechanism for the formation of this product. Include lone pairs and...
Draw step 1 of the mechanism for the formation of this product. Include lone pairs and formal charges in your answer. Do not explicitly draw out any hydrogen atoms in this step of the mechanism. :ÖH H3C CH3 + CH3...
##### A 10-kg block is attached to a vertical spring of constant 2000 N/m and slowly lowered...
A 10-kg block is attached to a vertical spring of constant 2000 N/m and slowly lowered to its equilibrium position. The block is now pulled down a distance of 3 cm and released from rest and executes vertical SHM. What is the net force on the block when it is at its lowest position? (Assume g = 10 m...
##### Question 9.56: Lily aldehyde, used in perfumes, can be made starting with a mixed aldol condensation between two different aldehydes. Provide their structures. Question 9.57: Benzaldehyde reacts with acetone and base to give a yellow crystalline product, C17H14O. Deduce its structure and explain how it is formed.
Question 9.56: Lily aldehyde, used in perfumes, can be made starting with a mixed aldol condensation between two different aldehydes. Provide their structures. Question 9.57: Benzaldehyde reacts with acetone and base to give a yellow crystalline product, C17H14O. Deduce its structure and explain how it is formed...
##### What is the limit of ( 1/(x-1) - 2/(x^2-1) ) as x approaches 1?
What is the limit of ( 1/(x-1) - 2/(x^2-1) ) as x approaches 1?...
##### Identify the hybridization of the carbon atoms in the molecule below, pargyline
Identify the hybridization of the carbon atoms in the molecule below, pargyline ...
##### An ideal gas in a cylinder is compressed very slowly to one-third its original volume while its temperature is held constant. The work required to accomplish this task is $75~\mathrm{J}$. (a) What is the change in the internal energy of the gas? (b) How much energy is transferred to the gas by heat in this process?
An ideal gas in a cylinder is compressed very slowly to one-third its original volume while its temperature is held constant. The work required to accomplish this task is $75~\mathrm{J}$. (a) What is the change in the internal energy of the gas? (b) How much energy is transferred to the gas by heat ...
##### The graph of a rational function f is shown below. Assume that all asymptotes and intercepts are shown and that the graph has no "holes". Use the graph to complete the following. (a) Find all x-intercepts and y-intercepts; check all that apply. (b) Write the equations for the vertical and horizontal asymptotes; select "None" where it applies. (c) Find the domain and range.
The graph of a rational function f is shown below. Assume that all asymptotes and intercepts are shown and that the graph has no "holes". Use the graph to complete the following. (a) Find all x-intercepts and y-intercepts. (b) Write the equations for the vertical and horizontal asymptotes. (c) Find the domain and range...
##### A student who is taking a course in Commerce wants to study the determinants of the balance of payments in Malaysia. A model is developed as follows: $BP_t = \beta_0 + \beta_1 X_t + \beta_2 M_t + \beta_3 K_t + \beta_4 E_t + e_t$, with BP = balance of payments (RM mil), X = exports (RM mil), M = imports (RM mil), K = gross domestic product (RM mil), E = foreign exchange rate (RM/USD). Computer output shows the analysis.
A student who is taking a course in Commerce wants to study the determinants of the balance of payments in Malaysia. A model is developed as follows: $BP_t = \beta_0 + \beta_1 X_t + \beta_2 M_t + \beta_3 K_t + \beta_4 E_t + e_t$, with BP = balance of payments (RM mil), X = exports (RM mil), M = imports (RM mil), K = gross domestic product (RM mil), E = foreign exchange rate (RM/USD). Computer output shows the analysis...
##### Let $K_1 = \mathbb{F}_2[x]/(x^3 + x + 1)$ and let $K_2 = \mathbb{F}_2[y]/(y^3 + y^2 + 1)$. In this problem we construct an isomorphism $K_1 \cong K_2$. Prove that $x + 1$ is a root of the polynomial $T^3 + T^2 + 1$ in $K_1[T]$. Show that the homomorphism $\mathbb{F}_2[y] \to K_1$ sending $y$ to $x + 1$ is surjective and has kernel the principal ideal $(y^3 + y^2 + 1)$, and explain how this completes the problem.
Let $K_1 = \mathbb{F}_2[x]/(x^3 + x + 1)$ and let $K_2 = \mathbb{F}_2[y]/(y^3 + y^2 + 1)$. In this problem we construct an isomorphism $K_1 \cong K_2$. Prove that $x + 1$ is a root of the polynomial $T^3 + T^2 + 1$ in $K_1[T]$. Show that the homomorphism $\mathbb{F}_2[y] \to K_1$ sending $y$ to $x + 1$ is surjective and has kernel the principal ideal $(y^3 + y^2 + 1)$, and explain how this completes the problem...
##### A curve is given by $\vec r(t) = [2\sin(t) + 2\sin(t)\cos(t)]\,\vec i + [2\cos(t) + 2\cos^2(t)]\,\vec j$ for $-\pi \le t \le \pi$, and is represented below.
A curve is given by $\vec r(t) = [2\sin(t) + 2\sin(t)\cos(t)]\,\vec i + [2\cos(t) + 2\cos^2(t)]\,\vec j$ for $-\pi \le t \le \pi$, and is represented below...
##### Which of the following is or are synthesis pathways of carboxylic acids? 1) oxidation of aldehydes 2) hydrolysis of anhydrides 3) hydrolysis of ketones. a) 1 and 2 b) 2 and 3 c) 1 and 3 d) 1
Which of the following is or are synthesis pathways of carboxylic acids? 1) oxidation of aldehydes 2) hydrolysis of anhydrides 3) hydrolysis of ketones. a) 1 and 2 b) 2 and 3 c) 1 and 3 d) 1...
##### The vice president of human resources at Ato Enterprises feels strongly that workers need to realize the benefits of their hard work
The vice president of human resources at Ato Enterprises feels strongly that workers need to realize the benefits of their hard work. This reveals the firm's responsibility...
##### We previously considered building multiple linear regression models for gas mileage of cars based on characteristics of each vehicle model. We can now consider a few different models and attempt to determine which model is better. (points) Using the table of summary values below, and given that we have taken a sample of 30 vehicles, compute the AIC for each of the three models. Based on these values, which model would you say is better? Model | Predictors | Residual Standard Error: Model 1, all 11 predictors, 3. ...
We previously considered building multiple linear regression models for gas mileage of cars based on characteristics of each vehicle model. We can now consider a few different models and attempt to determine which model is better. (points) Using the table of summary values below, and given that we have taken a sample of 30 vehicles, compute the AIC for each of the three models. Based on these values, which model would you say is better?...
##### Required information (The following information applies to the questions displayed below.) Martinez Company's relevant range of...
Required information (The following information applies to the questions displayed below.) Martinez Company's relevant range of production is 7,500 units to 12,500 units. When it produces and sells 10,000 units, its average costs per unit are as follows: Direct materials Direct labor Variable ma...
##### You want to obtain a sample to estimate a population proportion. Based on previous evidence, you believe the population proportion is approximately 77%. You would like to be 98% confident that your estimate is within 3% of the true population proportion. How large a sample size is required?
You want to obtain a sample to estimate a population proportion. Based on previous evidence, you believe the population proportion is approximately 77%. You would like to be 98% confident that your estimate is within 3% of the true population proportion. How large a sample size is required?...
##### Exocytosis is a process by which cells Multiple Choice release substances from the cell via vesicles....
Exocytosis is a process by which cells Multiple Choice release substances from the cell via vesicles. bring in substances from the outside via pores in the cell membrane. release substances from the cell via carrier proteins. release substances from the cell through pores in the cell membran...
##### Which of the following is NOT a measure of central tendency? a.) median b.) mean c.)...
Which of the following is NOT a measure of central tendency? a.) median b.) mean c.) mode d.) average e.) medium...
##### Two equal and opposite forces of 3 N have a net force of: A) 9 N B) 6 N C) 3 N D) 0 N. Which of the following is NOT constant for an object in uniform circular motion? A) distance with time B) speed C) velocity D) acceleration magnitude. Work is: A) energy times distance B) force times distance C) force ti...
Two equal and opposite forces of 3 N have a net force of: A) 9 N B) 6 N C) 3 N D) 0 N. Which of the following is NOT constant for an object in uniform circular motion? A) distance with time B) speed C) velocity D) acceleration magnitude. Work is: A) energy times distance B) force times distance C) force ti...
# SRFI 154: First-class dynamic extents
by Marc Nieper-Wißkirchen
status: final (2018/9/15)
## Abstract
Scheme has the notion of the dynamic extent of a procedure call. A number of standard Scheme procedures and syntaxes like dynamic-wind, call-with-current-continuation, and parameterize deal with the dynamic extent indirectly. The same holds true for the procedures and syntaxes dealing with continuation marks as defined by SRFI 157.
This SRFI reifies the dynamic extent into a first-class value together with a well-defined procedural interface and a syntax to create procedures that remember not only their environment at creation time but also their dynamic extent, which includes their dynamic environment. |
10 Feature Prioritization Methods
# Method 9: Cost of Delay
Prioritize your features based on the cost of not having them available to users
Modified from , by - Agile Coach at Agile by Design.
Change: Cost of delay, Unit of Time, and Duration of Effort
See results auto-ordered in: CD3 Score
Answer key questions about your features to surface assumptions and determine their cost of delay and duration. Among all your features, the unit of time should be consistent, so choose either "week" or "month" and stick with that value.
WHAT IS THE IDEA/PROBLEM/OPPORTUNITY?
WHAT TYPE OF BENEFIT DOES IT PROVIDE? (Increase revenue, Protect revenue, Reduce costs, Avoid costs)
HOW WILL IT GENERATE THIS BENEFIT?
WHAT ARE ANY ASSUMPTIONS THAT NEED TO BE TESTED?
Cost of Delaying Features

| # | Request Name | Cost of delay | Unit of Time | Duration of Effort | CD3 Score |
|---|---|---|---|---|---|
| 1 |  | $5,000 | Week (7 days) | 14 days | 357.1 |
| 2 | F4: Security upgrade | $1,500 | Week (7 days) | 14 days | 107.1 |
| 3 | F3: Video calls | $500 | Week (7 days) | 5 days | 100.0 |
| 4 | F2: Import functionality | $1,000 | Week (7 days) | 20 days | 50.0 |
| 5 | F7: Localization of mobile app | $500 | Week (7 days) | 24 days | 20.8 |
| 6 | F1: New billing page | $500 | Week (7 days) | 60 days | 8.3 |
| 7 | BNS-684: Notification panel redesign | $100 | Week (7 days) | 14 days | 7.1 |
| 8 | F8: Multi-location Management |  | Week (7 days) |  |  |
Extra - Useful Questions
For a better understanding and qualification of your features, try to answer these questions for each feature.
Definition - What is the idea/problem/opportunity?
Benefit Type - What type of benefit does it provide?
Process - How will it generate this benefit?
Assumptions - What are any assumptions that need to be tested?
Extra - Useful Questions about Features

| # | Request Name | Definition | Benefit Type | Process | Assumptions |
|---|--------------|------------|--------------|---------|-------------|
| 1 | F1: New billing page | | | | |
| 2 | F2: Import functionality | | | | |
| 3 | F3: Video calls | | | | |
| 4 | | | | | |
| 5 | | | | | |
| 6 | BNS-684: Notification panel redesign | | | | |
| 7 | F7: Localization of mobile app | | | | |
| 8 | F8: Multi-location Management | | | | |
Four Types of Values
Increase revenue
This type of revenue is often related to attracting new customers, or more revenue from existing customers through the development of new products, services, or entering new markets.
Protect revenue
This type of value lengthens the life-cycle of current revenue streams and keeps them from falling. Investment here does not generate new revenue; it protects the revenue we already have.
Reduce costs
This type of value looks to reduce any costs we are currently incurring.
Avoid costs
This type of value captures the costs we could incur in the future and puts measures in place to avoid them. The easiest example is features that need to be completed by some date in order to avoid regulatory fines.
"Cost of delay" + "per (unit of time)"
This figure represents how much value is lost per unit of time by not having a feature in the market.
For the results to be valid, the unit of time should be consistent across features and the cost of delay normalized to that time frame. For example, all values in the column should be "7 days" or "30 days".
CD3 is an abbreviation of "Cost of Delay Divided by Duration"
When we have a collection of potential features and their CD3 scores, the CD3 score tells us which option should be delivered first. It does this by weighing the cost of delay a feature incurs (the cost we incur as long as the feature is not available) against how quickly it could be delivered.
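A minimal sketch of the CD3 calculation in Python (the figures are taken from the table above; cost of delay is per week and duration is in days, matching how the table's scores were computed):

```python
def cd3_score(cost_of_delay_per_week: float, duration_days: float) -> float:
    """Cost of Delay Divided by Duration: deliver higher scores first."""
    return cost_of_delay_per_week / duration_days

features = [
    ("F4: Security upgrade", 1500, 14),
    ("F3: Video calls", 500, 5),
    ("F2: Import functionality", 1000, 20),
]

# Sequence features by descending CD3 score
for name, cod, days in sorted(features, key=lambda f: -cd3_score(f[1], f[2])):
    print(f"{name}: CD3 = {cd3_score(cod, days):.1f}")
```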
About Cost of Delay
The first thing you may have noticed is that entering data for Cost of Delay asked a lot more questions about your features. The reason is that discussions around value involve many assumptions. The questions help individuals and groups come to a clearer common understanding of a feature, and a better understanding of the value of doing it.
Another change is switching from looking at the value a feature provides to the value that is lost by not having it.
This more advanced technique is not to be used in all situations. If you can make a good enough prioritization decision without discovering the cost of delay and CD3 score for every feature, you should do it. Use cost of delay between a small set of items, or when the stakes are high, to get very precise guidance on sequencing features.
# Symmetry of polynomials
Learn how to determine if a polynomial function is even, odd, or neither.
#### What you should be familiar with before taking this lesson
A function is an even function if its graph is symmetric with respect to the $y$-axis.
Algebraically, $f$ is an even function if $f(-x)=f(x)$ for all $x$.
A function is an odd function if its graph is symmetric with respect to the origin.
Algebraically, $f$ is an odd function if $f(-x)=-f(x)$ for all $x$.
If this is new to you, we recommend that you check out our intro to symmetry of functions.
#### What you will learn in this lesson
You will learn how to determine whether a polynomial is even, odd, or neither, based on the polynomial's equation.
## Investigation: Symmetry of monomials
A monomial is a one-termed polynomial. Monomials have the form $f(x)=ax^n$, where $a$ is a real number and $n$ is an integer greater than or equal to 0.
In this investigation, we will analyze the symmetry of several monomials to see if we can come up with general conditions for a monomial to be even or odd.
In general, to determine whether a function $f$ is even, odd, or neither even nor odd, we analyze the expression for $f(-x)$:
• If $f(-x)$ is the same as $f(x)$, then we know $f$ is even.
• If $f(-x)$ is the opposite of $f(x)$, then we know $f$ is odd.
• Otherwise, it is neither even nor odd.
As a first example, let's determine whether $f(x)=4x^3$ is even, odd, or neither.
Here $f(-x)=4(-x)^3=-4x^3=-f(x)$, and so function $f$ is an odd function.
The graph of $y=f(x)$ is symmetric with respect to the origin, which confirms our solution!
Now try some examples on your own to see if you can find a pattern.
1) Is $g(x)=3x^2$ even, odd, or neither?
To determine whether $g$ is even, odd, or neither, let's find $g(-x)$: $g(-x)=3(-x)^2=3x^2$.
Since $g(-x)=g(x)$, the function is an even function.
2) Is $h(x)=-2x^5$ even, odd, or neither?
To determine whether $h$ is even, odd, or neither, let's find $h(-x)$: $h(-x)=-2(-x)^5=2x^5$.
Since $h(-x)=-h(x)$, the function is an odd function.
### Concluding the investigation
From the above problems, we see that if f is a monomial function of even degree, then function f is an even function. Similarly, if f is a monomial function of odd degree, then function f is an odd function.
| | Even Function | Odd Function |
|---|---|---|
| Examples | $g(x)=3x^{2}$ | $h(x)=-2x^{5}$ |
| In general | $f(x)=ax^{n}$ where $n$ is even | $f(x)=ax^{n}$ where $n$ is odd |

This is because $(-x)^{n}=x^{n}$ when $n$ is even and $(-x)^{n}=-x^{n}$ when $n$ is odd.
This is probably the reason why even and odd functions were named as such in the first place!
## Investigation: Symmetry of polynomials
In this investigation, we will examine the symmetry of polynomials with more than one term.
### Example 1: $f(x)=2x^4-3x^2-5$
To determine whether $f$ is even, odd, or neither, we find $f(-x)$: $f(-x)=2(-x)^4-3(-x)^2-5=2x^4-3x^2-5$.
Since $f(-x)=f(x)$, function $f$ is an even function.
Note that all the terms of $f$ are of even degree.
### Example 2: $g(x)=5x^7-3x^3+x$
Again, we start by finding $g(-x)$: $g(-x)=5(-x)^7-3(-x)^3+(-x)=-5x^7+3x^3-x$.
At this point, notice that each term in $g(-x)$ is the opposite of the corresponding term in $g(x)$. In other words, $g(-x)=-g(x)$, and so $g$ is an odd function.
Note that all the terms of $g$ are of odd degree.
### Example 3: $h(x)=2x^4-7x^3$
Let's find $h(-x)$:
$$\begin{aligned}h(-x)&=2(-x)^4-7(-x)^3\\ &=2x^4+7x^3 &&\text{since }(-x)^4=x^4\text{ and }(-x)^3=-x^3\end{aligned}$$
$2x^4+7x^3$ is not the same as $h(x)$, nor is it the opposite of $h(x)$.
Mathematically, $h(-x)\neq h(x)$ and $h(-x)\neq -h(x)$, and so $h$ is neither even nor odd.
Note that $h$ has one even-degree term and one odd-degree term.
### Concluding the investigation
In general, we can determine whether a polynomial is even, odd, or neither by examining each individual term.
| | General rule | Example polynomial |
|---|---|---|
| Even | A polynomial is even if each term is an even function. | $f(x)=2x^{4}-3x^{2}-5$ |
| Odd | A polynomial is odd if each term is an odd function. | $g(x)=5x^{7}-3x^{3}+x$ |
| Neither | A polynomial is neither even nor odd if it is made up of both even and odd functions. | $h(x)=2x^{4}-7x^{3}$ |
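The term-by-term rule translates directly into a small check on a polynomial's coefficients. A minimal sketch in Python (the coefficient-list representation, where `coeffs[k]` is the coefficient of $x^k$, is an assumption for illustration):

```python
def polynomial_symmetry(coeffs: list[float]) -> str:
    """Classify a polynomial as 'even', 'odd', or 'neither'.

    coeffs[k] is the coefficient of x**k, e.g. 2x^4 - 3x^2 - 5
    is represented as [-5, 0, -3, 0, 2].
    """
    degrees = [k for k, c in enumerate(coeffs) if c != 0]
    if all(k % 2 == 0 for k in degrees):
        return "even"    # every term has even degree
    if all(k % 2 == 1 for k in degrees):
        return "odd"     # every term has odd degree
    return "neither"

print(polynomial_symmetry([-5, 0, -3, 0, 2]))           # f(x)=2x^4-3x^2-5 -> even
print(polynomial_symmetry([0, 1, 0, -3, 0, 0, 0, 5]))   # g(x)=5x^7-3x^3+x -> odd
print(polynomial_symmetry([0, 0, 0, -7, 2]))            # h(x)=2x^4-7x^3 -> neither
```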
### Check your understanding
3) Is $f(x)=-3x^4-7x^2+5$ even, odd, or neither?
To start, let's decide if each term of $f(x)=-3x^4-7x^2+5$ is an even or odd function.
• $-3x^4$ is an even function, since it's a monomial of even degree.
• $-7x^2$ is an even function, since it's a monomial of even degree.
• $5$ is an even function, since it's a monomial of even degree. Notice it can be written as $5\cdot x^0$.
Since each term in the polynomial function is itself an even function (even degree), the polynomial is also even.
We could also see this algebraically by showing that $f(-x)=f(x)$:
$$\begin{aligned}f(-x)&=-3(-x)^4-7(-x)^2+5\\ &=-3x^4-7x^2+5 &&\text{since }(-x)^n=x^n\text{ for even }n\\ &=f(x)\end{aligned}$$
4) Is $g(x)=8x^7-6x^3+x^2$ even, odd, or neither?
To start, let's decide if each term of $g(x)=8x^7-6x^3+x^2$ is an even or odd function.
• $8x^7$ is an odd function, since it's a monomial of odd degree.
• $-6x^3$ is an odd function, since it's a monomial of odd degree.
• $x^2$ is an even function, since it is a monomial of even degree.
Since the terms in the polynomial are mixed (even and odd), the polynomial is neither even nor odd.
We can also verify this algebraically. Notice that $g(-x)$ is neither $g(x)$ nor its opposite:
$$\begin{aligned}g(-x)&=8(-x)^7-6(-x)^3+(-x)^2\\ &=-8x^7+6x^3+x^2\end{aligned}$$
5) Is $h(x)=10x^5+2x^3-x$ even, odd, or neither?
# Complex numbers; trigonometric identity
Use the binomial expansion to find the real and imaginary parts of $(\cos\theta+i\sin\theta)^5$. Hence show that $\dfrac{\sin 5\theta}{\sin\theta}=16\cos^4\theta-12\cos^2\theta+1$.
I expanded this expression and I got: $\cos^5\theta+5i\cos^4\theta\sin\theta-10\cos^3\theta\sin^2\theta-10i\cos^2\theta\sin^3\theta+5\cos\theta\sin^4\theta+i\sin^5\theta$
Then I used de Moivre's theorem and I got: $\cos 5\theta+i\sin 5\theta$
I compared the imaginary parts and I got: $\sin 5\theta=5\cos^4\theta\sin\theta-10\cos^2\theta\sin^3\theta+\sin^5\theta$,
which is very close to $(16\cos^4\theta-12\cos^2\theta+1)\sin\theta$, but not the same.
Where do I make the mistake?
Thanks for any help! ;)
• Have you tried using $\cos^2\theta+\sin^2\theta=1$?
– Fan
May 15 '17 at 19:13
• You didn't err. You just didn't finish. May 15 '17 at 19:35
hint
In your last line, factor out $\sin(\theta)$,
replace
$\sin^2(\theta)$ by $1-\cos^2 (\theta)$ and
$\sin^4 (\theta)$ by
$(1-\cos^2 (\theta))^2=1+\cos^4 (\theta)-2\cos^2 (\theta)$
you will get it.
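Spelling out the hint (this completion is mine, not part of the original answer): write $c=\cos\theta$, factor out $\sin\theta$, and substitute $\sin^2\theta=1-c^2$:
$$\begin{aligned}\frac{\sin 5\theta}{\sin\theta} &= 5c^4 - 10c^2(1-c^2) + (1-c^2)^2\\ &= 5c^4 - 10c^2 + 10c^4 + 1 - 2c^2 + c^4\\ &= 16c^4 - 12c^2 + 1.\end{aligned}$$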
Essentials of Stochastic Processes
Kiyosi Itô, Kyoto University, Japan
Translations of Mathematical Monographs
2006; 171 pp; hardcover
Volume: 231
ISBN-10: 0-8218-3898-9
ISBN-13: 978-0-8218-3898-3
List Price: US$75 Member Price: US$60
Order Code: MMONO/231
This book is an English translation of Kiyosi Itô's monograph published in Japanese in 1957. It gives a unified and comprehensive account of additive processes (or Lévy processes), stationary processes, and Markov processes, which constitute the three most important classes of stochastic processes. Written by one of the leading experts in the field, this volume presents to the reader lucid explanations of the fundamental concepts and basic results in each of these three major areas of the theory of stochastic processes.
With the requirements limited to an introductory graduate course on analysis (especially measure theory) and basic probability theory, this book is an excellent text for any graduate course on stochastic processes.
Kiyosi Itô is famous throughout the world for his work on stochastic integrals (including the Itô formula), but he has made substantial contributions to other areas of probability theory as well, such as additive processes, stationary processes, and Markov processes (especially diffusion processes), which are topics covered in this book. For his contributions and achievements, he has received, among others, the Wolf Prize, the Japan Academy Prize, and the Kyoto Prize.
Graduate students and research mathematicians interested in stochastic processes.
Reviews
"Written by one of the leading experts and founding fathers of the field, this volume presents to the reader lucid explanations of the fundamental concepts and basic results in each of the three major areas of the theory of stochastic processes."
-- Zentralblatt MATH
"Because o fits conciseness, clarity, and carefully chosen set of bibliographic references (added as a Postscript) it seems the ideal support for a course on stochastic processes."
-- Mathematical Reviews |
# Pearson correlation - overview
This page offers structured overviews of one or more selected methods. Add additional methods for comparisons by clicking on the dropdown button in the right-hand column. To practice with a specific method click the button at the bottom row of the table
Pearson correlation
One way ANOVA
(In each paired row below, the entry before the slash applies to the Pearson correlation and the entry after it to one way ANOVA.)
Variable 1 / Independent (grouping) variable:
One quantitative variable of interval or ratio level / One categorical variable with $I$ independent groups ($I \geqslant 2$)
Variable 2 / Dependent variable:
One quantitative variable of interval or ratio level / One quantitative variable of interval or ratio level
Null hypothesis
H0: $\rho = \rho_0$
$\rho$ is the unknown Pearson correlation in the population, $\rho_0$ is the correlation in the population according to the null hypothesis (usually 0). The Pearson correlation is a measure for the strength and direction of the linear relationship between two variables of at least interval measurement level.
ANOVA $F$ test:
• H0: $\mu_1 = \mu_2 = \ldots = \mu_I$
$\mu_1$ is the population mean for group 1; $\mu_2$ is the population mean for group 2; $\mu_I$ is the population mean for group $I$
$t$ Test for contrast:
• H0: $\Psi = 0$
$\Psi$ is the population contrast, defined as $\Psi = \sum a_i\mu_i$. Here $\mu_i$ is the population mean for group $i$ and $a_i$ is the coefficient for $\mu_i$. The coefficients $a_i$ sum to 0.
$t$ Test multiple comparisons:
• H0: $\mu_g = \mu_h$
$\mu_g$ is the population mean for group $g$; $\mu_h$ is the population mean for group $h$
Alternative hypothesis
H1 two sided: $\rho \neq \rho_0$
H1 right sided: $\rho > \rho_0$
H1 left sided: $\rho < \rho_0$
ANOVA $F$ test:
• H1: not all population means are equal
$t$ Test for contrast:
• H1 two sided: $\Psi \neq 0$
• H1 right sided: $\Psi > 0$
• H1 left sided: $\Psi < 0$
$t$ Test multiple comparisons:
• H1 - usually two sided: $\mu_g \neq \mu_h$
Assumptions of test for correlation / Assumptions
• In the population, the two variables are jointly normally distributed (this covers the normality, homoscedasticity, and linearity assumptions)
• Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
Note: these assumptions are only important for the significance test and confidence interval, not for the correlation coefficient itself. The correlation coefficient just measures the strength of the linear relationship between two variables.
• Within each population, the scores on the dependent variable are normally distributed
• The standard deviation of the scores on the dependent variable is the same in each of the populations: $\sigma_1 = \sigma_2 = \ldots = \sigma_I$
• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2, $\ldots$, group $I$ sample is an independent SRS from population $I$. That is, within and between groups, observations are independent of one another
Test statistic
Test statistic for testing H0: $\rho = 0$:
• $t = \dfrac{r \times \sqrt{N - 2}}{\sqrt{1 - r^2}}$
where $r$ is the sample correlation $r = \frac{1}{N - 1} \sum_{j}\Big(\frac{x_{j} - \bar{x}}{s_x} \Big) \Big(\frac{y_{j} - \bar{y}}{s_y} \Big)$ and $N$ is the sample size
Test statistic for testing values for $\rho$ other than $\rho = 0$:
• $z = \dfrac{r_{Fisher} - \rho_{0_{Fisher}}}{\sqrt{\dfrac{1}{N - 3}}}$
• $r_{Fisher} = \dfrac{1}{2} \times \log\Bigg(\dfrac{1 + r}{1 - r} \Bigg )$, where $r$ is the sample correlation
• $\rho_{0_{Fisher}} = \dfrac{1}{2} \times \log\Bigg( \dfrac{1 + \rho_0}{1 - \rho_0} \Bigg )$, where $\rho_0$ is the population correlation according to H0
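A minimal sketch of these test statistics in Python (plain lists as input; an illustration of the formulas above, not a reference implementation):

```python
import math

def pearson_r(x: list[float], y: list[float]) -> float:
    """Sample Pearson correlation r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * sx * sy)

def t_statistic(r: float, n: int) -> float:
    """t for testing H0: rho = 0, with N - 2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

def z_statistic(r: float, rho0: float, n: int) -> float:
    """z for testing H0: rho = rho0, via the Fisher transformation."""
    fisher = lambda v: 0.5 * math.log((1 + v) / (1 - v))
    return (fisher(r) - fisher(rho0)) / math.sqrt(1 / (n - 3))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 2.9, 3.4, 4.8, 5.1]
r = pearson_r(x, y)
print(r, t_statistic(r, len(x)), z_statistic(r, 0.5, len(x)))
```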
ANOVA $F$ test:
• \begin{aligned}[t] F &= \dfrac{\sum\nolimits_{subjects} (\mbox{subject's group mean} - \mbox{overall mean})^2 / (I - 1)}{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2 / (N - I)}\\ &= \dfrac{\mbox{sum of squares between} / \mbox{degrees of freedom between}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\ &= \dfrac{\mbox{mean square between}}{\mbox{mean square error}} \end{aligned}
where $N$ is the total sample size, and $I$ is the number of groups.
Note: mean square between is also known as mean square model; mean square error is also known as mean square residual or mean square within
$t$ Test for contrast:
• $t = \dfrac{c}{s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}}$
Here $c$ is the sample estimate of the population contrast $\Psi$: $c = \sum a_i\bar{y}_i$, with $\bar{y}_i$ the sample mean in group $i$. $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $a_i$ is the contrast coefficient for group $i$, and $n_i$ is the sample size of group $i$.
Note that if the contrast compares only two group means with each other, this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). In that case the only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
$t$ Test multiple comparisons:
• $t = \dfrac{\bar{y}_g - \bar{y}_h}{s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}}$
$\bar{y}_g$ is the sample mean in group $g$, $\bar{y}_h$ is the sample mean in group $h$, $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $n_g$ is the sample size of group $g$, and $n_h$ is the sample size of group $h$.
Note that this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). The only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
Pooled standard deviation (one way ANOVA only; n.a. for the Pearson correlation)
\begin{aligned} s_p &= \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2 + \ldots + (n_I - 1) \times s^2_I}{N - I}}\\ &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - I}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}
where $s^2_i$ is the variance in group $i$
Sampling distribution of $t$ and of $z$ if H0 were true / Sampling distribution of $F$ and of $t$ if H0 were true
Sampling distribution of $t$:
• $t$ distribution with $N - 2$ degrees of freedom
Sampling distribution of $z$:
• Approximately the standard normal distribution
Sampling distribution of $F$:
• $F$ distribution with $I - 1$ (df between, numerator) and $N - I$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
• $t$ distribution with $N - I$ degrees of freedom
Significant?
$t$ Test two sided:
• Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
• Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test right sided:
• Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
• Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test left sided:
• Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
• Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$z$ Test two sided:
• Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
• Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
$z$ Test right sided:
• Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
• Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
$z$ Test left sided:
• Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
• Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
$F$ test:
• Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
• Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$ (e.g. .01 < $p$ < .025 when $F$ = 3.91, df between = 4, and df error = 20)
$t$ Test for contrast two sided:
$t$ Test for contrast right sided:
$t$ Test for contrast left sided:
$t$ Test multiple comparisons two sided:
• Check if $t$ observed in sample is at least as extreme as critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
• Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons right sided
• Check if $t$ observed in sample is equal to or larger than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
• Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons left sided
• Check if $t$ observed in sample is equal to or smaller than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
• Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
Approximate $C$% confidence interval for $\rho$ / $C$% confidence interval for $\Psi$, for $\mu_g - \mu_h$, and for $\mu_i$
First compute the approximate $C$% confidence interval for $\rho_{Fisher}$:
• $lower_{Fisher} = r_{Fisher} - z^* \times \sqrt{\dfrac{1}{N - 3}}$
• $upper_{Fisher} = r_{Fisher} + z^* \times \sqrt{\dfrac{1}{N - 3}}$
where $r_{Fisher} = \frac{1}{2} \times \log\Bigg(\dfrac{1 + r}{1 - r} \Bigg )$ and $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval). Then transform back to get the approximate $C$% confidence interval for $\rho$:
• lower bound = $\dfrac{e^{2 \times lower_{Fisher}} - 1}{e^{2 \times lower_{Fisher}} + 1}$
• upper bound = $\dfrac{e^{2 \times upper_{Fisher}} - 1}{e^{2 \times upper_{Fisher}} + 1}$
Confidence interval for $\Psi$ (contrast):
• $c \pm t^* \times s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}$
where the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). Note that $n_i$ is the sample size of group $i$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for $\mu_g - \mu_h$ (multiple comparisons):
• $(\bar{y}_g - \bar{y}_h) \pm t^{**} \times s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}$
where $t^{**}$ depends upon $C$, degrees of freedom ($N - I$), and the multiple comparison procedure. If you do not want to apply a multiple comparison procedure, $t^{**} = t^*$ = the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$. Note that $n_g$ is the sample size of group $g$, $n_h$ is the sample size of group $h$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for a single population mean $\mu_i$:
• $\bar{y}_i \pm t^* \times \dfrac{s_p}{\sqrt{n_i}}$
where $\bar{y}_i$ is the sample mean for group $i$, $n_i$ is the sample size for group $i$, and the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20).
Properties of the Pearson correlation coefficient / Effect size
Properties of the Pearson correlation coefficient:
• The Pearson correlation coefficient is a measure for the linear relationship between two quantitative variables.
• The Pearson correlation coefficient squared reflects the proportion of variance explained in one variable by the other variable.
• The Pearson correlation coefficient can take on values between -1 (perfect negative relationship) and 1 (perfect positive relationship). A value of 0 means no linear relationship.
• The absolute size of the Pearson correlation coefficient is not affected by any linear transformation of the variables. However, the sign of the Pearson correlation will flip when the scores on one of the two variables are multiplied by a negative number (reversing the direction of measurement of that variable). For example: the correlation between $x$ and $y$ is equivalent to the correlation between $3x + 5$ and $2y - 6$; the absolute value of the correlation between $x$ and $y$ is equivalent to the absolute value of the correlation between $-3x + 5$ and $2y - 6$, but the signs of the two correlation coefficients will be in opposite directions, due to the multiplication of $x$ by $-3$.
• The Pearson correlation coefficient does not say anything about causality.
• The Pearson correlation coefficient is sensitive to outliers.
Effect sizes for one way ANOVA:
• Proportion variance explained $\eta^2$ and $R^2$: proportion variance of the dependent variable $y$ explained by the independent variable:
$$\begin{align} \eta^2 = R^2 &= \dfrac{\mbox{sum of squares between}}{\mbox{sum of squares total}} \end{align}$$
Only in one way ANOVA $\eta^2 = R^2$. $\eta^2$ (and $R^2$) is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
• Proportion variance explained $\omega^2$: corrects for the positive bias in $\eta^2$ and is equal to:
$$\omega^2 = \frac{\mbox{sum of squares between} - \mbox{df between} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}$$
$\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$.
• Cohen's $d$: standardized difference between the mean in group $g$ and in group $h$:
$$d_{g,h} = \frac{\bar{y}_g - \bar{y}_h}{s_p}$$
Indicates how many standard deviations $s_p$ two sample means are removed from each other.
ANOVA table (n.a. for the Pearson correlation): click the link for a step-by-step explanation of how to compute the sum of squares.
Equivalent to / Equivalent to
The Pearson correlation is equivalent to OLS regression with one independent variable:
• $b_1 = r \times \frac{s_y}{s_x}$
• Results of the significance test ($t$ and $p$ value) testing $H_0$: $\beta_1 = 0$ are equivalent to the results of the significance test testing $H_0$: $\rho = 0$
One way ANOVA is equivalent to OLS regression with one categorical independent variable transformed into $I - 1$ code variables:
• $F$ test ANOVA equivalent to $F$ test regression model
• $t$ test for contrast $i$ equivalent to $t$ test for regression coefficient $\beta_i$ (specific contrast tested depends on how the code variables are defined)
Example context
• Pearson correlation: Is there a linear relationship between physical health and mental health?
• One way ANOVA: Is the average mental health score different between people from a low, moderate, and high economic class?
SPSS
Analyze > Correlate > Bivariate...
• Put your two variables in the box below Variables
Analyze > Compare Means > One-Way ANOVA...
• Put your dependent (quantitative) variable in the box below Dependent List and your independent (grouping) variable in the box below Factor
or
Analyze > General Linear Model > Univariate...
• Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factor(s)
Jamovi
Regression > Correlation Matrix
• Put your two variables in the white box at the right
• Under Correlation Coefficients, select Pearson (selected by default)
• Under Hypothesis, select your alternative hypothesis
ANOVA > ANOVA
• Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factors
Practice questions
# How do you find the vertical, horizontal and slant asymptotes of: (x^2-1)/(x^2+4)?
Apr 28, 2016
vertical asymptote: does not exist
horizontal asymptote: $y = 1$
slant asymptote: does not exist
#### Explanation:
Finding the Vertical Asymptote
Given,
$f \left(x\right) = \frac{{x}^{2} - 1}{{x}^{2} + 4}$
Factor the numerator.
$f \left(x\right) = \frac{\left(x + 1\right) \left(x - 1\right)}{{x}^{2} + 4}$
Cancel out any factors that appear in the numerator and denominator. Since there aren't any, set the denominator equal to $0$ and solve for $x$.
${x}^{2} + 4 = 0$
${x}^{2} = - 4$
$x = \pm \sqrt{- 4}$
Since you can't take the square root of a negative number in the domain of real numbers, there is no vertical asymptote.
$\therefore$, the vertical asymptote does not exist.
Finding the Horizontal Asymptote
Given,
$f(x) = \frac{\textcolor{darkorange}{1}x^{2} - 1}{\textcolor{purple}{1}x^{2} + 4}$
Divide the $\textcolor{darkorange}{\text{leading coefficient}}$ of the leading term in the numerator by the $\textcolor{purple}{\text{leading coefficient}}$ of the leading term in the denominator.
$f(x) = \frac{\textcolor{darkorange}{1}}{\textcolor{purple}{1}}$
$y = 1$
Finding the Slant Asymptote
Given,
$f \left(x\right) = \frac{{x}^{2} - 1}{{x}^{2} + 4}$
There would be a slant asymptote if the degree of the leading term in the numerator were exactly one more than the degree of the leading term in the denominator. In your case, the degrees of the numerator and denominator are equal.
$\therefore$, the slant asymptote does not exist.
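A quick numeric sanity check (my own addition, not part of the original answer): evaluating the function at increasingly large $x$ shows the values approaching the horizontal asymptote $y=1$:

```python
def f(x: float) -> float:
    return (x ** 2 - 1) / (x ** 2 + 4)

for x in [10, 100, 1000]:
    print(x, f(x))  # -> 0.9519..., 0.99950..., 0.9999950...
```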
# 12.19.3 ALMO-CIS/TDA with selected fragment occupied-virtual pairs
(July 14, 2022)
Q-Chem 6.0 and later versions support ALMO-CIS/TDA calculations with selected fragment occupied-virtual pairs, i.e., only excitation amplitudes that correspond to transitions between selected occupied and virtual orbitals are considered in Eq. (12.73). To run this type of calculation one needs to set ALMOCIS_FRAGOV $>0$; currently three different modes are supported:
ALMOCIS_FRAGOV
ALMOCIS_FRAGOV
Controls ALMO-CIS/TDA calculations with selected fragment occupied-virtual pairs
TYPE:
INTEGER
DEFAULT:
0
OPTIONS:
0 Doing standard ALMO-CIS/TDA calculations (if LOCAL_CIS $>0$)
1 Reading user-specified active fragment O-V pairs from the $frag_ov_pairs section
2 Excitations on the first fragment only
3 Excitations from the occupied orbitals on the first fragment to all virtuals in the system
RECOMMENDATION:
None

The format of the $frag_ov_pairs section:
```
$frag_ov_pairs
[number of frag_ov_pairs]
[occ_frg_idx1] [vir_frg_idx1]
[occ_frg_idx2] [vir_frg_idx2]
...
$end
```
These modified ALMO-CIS/TDA models can be used to model excited states in complex environments, such as the local excitation of a chromophore in solution or its charge-transfer-to-solvent (CTTS) excitations. Note that the iterative Davidson algorithm is required for these calculations, i.e., EIGSLV_METH = 1.
Example 12.47 ALMO-TDA calculation for formamide-water with user-specified occupied-virtual pairs: O(1) -> V(1) and O(1) -> V(2)
```
$molecule
0 1
--
0 1
C    1.1508059365   0.2982718924   0.0240277739
O    0.3545181649   1.2334803420  -0.0015882208
N    0.8104369587  -1.0072797234   0.0043506838
H    2.2327270535   0.4686363261   0.0666232655
H   -0.1675092286  -1.2596328526  -0.0352400180
H    1.5210524537  -1.7122494331   0.0139809901
--
0 1
O   -1.9693273428  -0.2999882700  -0.2293071572
H   -1.3827632725   0.4697313642  -0.1375254289
H   -2.7470364523  -0.0962178118   0.2907490329
$end
```
```
$rem
jobtype         sp
basis           6-31G*
method          pbe0
sym_ignore      true
symmetry        false
frgm_method     stoll
cis_n_roots     4
thresh          12
local_cis       1
almocis_fragov  1
eigslv_meth     1   ! iterative method
$end
```
```
$frag_ov_pairs
2
1 1
1 2
$end
```
# How do you evaluate 12div4div2?
Mar 6, 2018
The answer is $1.5$.
#### Explanation:
Since all the operations in this expression are the same, start with the leftmost part and move to the right after every step.
It ends up looking like this:
$\textcolor{white}{=} 12 \div 4 \div 2$
$= \textcolor{red}{12 \div 4} \div 2$
$= \textcolor{red}{3} \div 2$
$= \textcolor{blue}{3 \div 2}$
$= \textcolor{blue}{1.5}$
# Logarithm Rules, Tables, Formulas and Shortcuts
Logarithm Solved Examples - Page 3
Logarithm Important Questions - Page 4
Logarithm Video Lecture - Page 5
Logarithm, in mathematics, is the exponent or power to which a stated number, called the base, is raised to yield a specific number. For example, in the expression $10^{2} = 100$, the logarithm of 100 to the base 10 is 2. This is written as $\log_{10} 100 = 2$. Logarithms were originally invented to help simplify the arithmetical processes of multiplication, division, expansion to a power, and extraction of a root, but they are nowadays used for a variety of purposes in pure and applied mathematics.
If for a positive real number $a$ ($a \neq 1$), $a^{m} = b$, then the index $m$ is called the logarithm of $b$ to the base $a$.
We write this as: $\log_{a} b = m$
Log is the abbreviation of the word 'Logarithm'. Thus $a^{m} = b \leftrightarrow \log_{a} b = m$,
where $a^{m} = b$ is called the exponential form and $\log_{a} b = m$ is called the logarithmic form.
| Exponential Form | Logarithmic Form |
|---|---|
| $3^{5} = 243$ | $\log_{3} 243 = 5$ |
| $2^{4} = 16$ | $\log_{2} 16 = 4$ |
| $3^{0} = 1$ | $\log_{3} 1 = 0$ |
| $8^{1/3} = 2$ | $\log_{8} 2 = \frac{1}{3}$ |
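A quick way to check these conversions is to evaluate both forms numerically. A minimal sketch in Python using the standard `math.log(x, base)` function:

```python
import math

# Each tuple is (base, exponent, result): base**exponent == result,
# so log(result, base) should equal the exponent.
examples = [(3, 5, 243), (2, 4, 16), (3, 0, 1), (8, 1/3, 2)]

for base, exponent, result in examples:
    assert math.isclose(base ** exponent, result)
    assert math.isclose(math.log(result, base), exponent)
    print(f"log_{base}({result}) = {math.log(result, base):.4f}")
```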
## Sunday, April 22, 2012
### How do we find the Volume of pyramids and cones?
To find the volume of a pyramid:
V = (1/3)B·h, where B is the area of the base and h is the height.
Volume of a cone:
V = (1/3)πr²h, which is the same formula V = (1/3)Bh with B = πr².
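A minimal sketch of both formulas in Python (the base area and dimensions below are illustrative):

```python
import math

def pyramid_volume(base_area: float, height: float) -> float:
    return base_area / 3 * height               # V = (1/3)Bh

def cone_volume(radius: float, height: float) -> float:
    return math.pi * radius ** 2 * height / 3   # V = (1/3)πr²h

print(pyramid_volume(12, 5))   # base area 12, height 5 -> 20.0
print(cone_volume(3, 4))       # radius 3, height 4 -> ~37.70
```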
### How do you find the Surface Area and Volume of a sphere?
To find the Surface Area of a sphere you would use this formula: $A = 4\pi r^2$
To find the Volume of a sphere you would use this formula: $V = \frac{4}{3}\pi r^3$
Exercise (figure omitted): find the surface area and volume of the given sphere.
## Tuesday, April 10, 2012
### how do we calculate Surface Area of a Cylinder?
To find the Surface Area of a cylinder you would use the formula
S.A. = L.A. + 2B
To find the Lateral Area (L.A.) you would use the formula
L.A. = 2πrh
Once you get the lateral area, you plug that number into the S.A. formula. Next, you find the area of the base to plug into the S.A. formula.
Area of base = πr²
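A minimal sketch of the S.A. = L.A. + 2B calculation in Python:

```python
import math

def cylinder_surface_area(radius: float, height: float) -> float:
    lateral_area = 2 * math.pi * radius * height   # L.A. = 2πrh
    base_area = math.pi * radius ** 2              # B = πr²
    return lateral_area + 2 * base_area            # S.A. = L.A. + 2B

print(cylinder_surface_area(3, 7))  # -> ~188.50 (i.e., 60π)
```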
### How do we identify Solids?
Solid Geometry
- The study of 3-dimensional space.
- There are 3 dimensions: width, depth, and height.
Properties of Solids
- Volume
- Surface Area (SA)
Types of Solids
- Polyhedra
- Non-Polyhedra
• Polyhedra: have only flat faces. Shapes that are polyhedra include prisms, pyramids, and the Platonic solids.
• Non-Polyhedra: have at least one surface that is not flat. Shapes that are non-polyhedra include spheres, cylinders, cones, and the torus.
## Sunday, March 25, 2012
### How do we find the area of a circle?
To find the area of a circle you would use the formula:
A = πr²
A is the area
r is the radius of the circle
With the given information, you plug the number(s) into the formula.
### How do we find the area of regular polygons?
To calculate the area of a regular polygon you would use the formula :
A = ½ * nas OR A= ½ * Pa
A is the area
P is the perimeter
a is the apothem
s is the length of each side
n is the number of sides
## Thursday, March 22, 2012
### how do we find the area of parallelograms, kites, and trapezoids?
When finding the area of figures like parallelograms, kites, and trapezoids, each shape has its own formula.
Area of a Parallelogram: A = b × h
Area of a Kite: A = ½ d₁d₂
Area of a Trapezoid: A = ½ h(b₁ + b₂)
Example #1
Parallelogram Area = B x H
Area = 12 x 5
Area = 60 cm ²
Example #2
Kite Area = ½ d1d2
Area= ½ ( 8 x 6)
Area= ½ (48)
Area = 24 cm ²
Example #3
Trapezoid Area = ½ h( b1 +b2 )
Area = ½ 5(10 + 14)
Area = ½ (120)
Area = 60in²
### How do we calculate the area of rectangles and triangles?
Area: the total number of square units inside a figure/shape.
When finding the area of a triangle or a rectangle, there's a formula for each shape.
Formula for a Triangle: A = ½ b × h
Formula for a Rectangle: A = b × h
Example #1:
Find the area of a triangle with base of 5in. and the height of 8in.
A= ½b × h
A=½ (5) × (8)
A= ½ × 40
A= 20 in²
Example #2:
Find the area of a rectangle with a base of 2cm. and height of 9cm.
A= b × h
A= (2)×(9)
A= 18 cm²
## Monday, March 12, 2012
### How do we solve compound loci problems ?
A compound locus problem always involves two or more locus conditions in the same problem.
You can tell there is more than one locus condition because each one is separated by the words "AND" or "AND ALSO".
To solve two or more locus conditions in the same problem, work out each one separately but on the same graph diagram.
### How do we find the locus of points?
• A locus is a general graph of a given equation
• A locus is the set of all points that satisfy a given condition
• There are 5 different loci
1. The locus of points equidistant (at an equal distance from another point) from a single point.
• A circle centered at the given point, formed at the same distance all around the center.
The locus of 1 unit from point A.
2. The locus of points equidistant from two fixed points.
• Forming a line through the middle of the two points
The locus of points P and Q is :
3. The locus of points from a single line.
• Two parallel lines, one formed on each side of the original line, equidistant from it.
4. Locus of points equidistant from two parallel lines.
• A line through the middle of the two parallel lines.
5. The locus of points from two intersecting lines.
• Two intersecting lines (the angle bisectors) halfway between the two original lines.
## Sunday, March 4, 2012
### How do we solve logic problems using conditionals?
When making a conditional there is a rule we have to remember,
which is:
* If hypothesis, then conclusion
Example:
* If the light is red, then the car will stop
* If it is raining, then I will take my umbrella
When converting the conditional to an inverse, you negate both parts:
* if not hypothesis, then not conclusion
Example:
* conditional- if I walk all day, then I am tired
* inverse- if I do not walk all day, then I am not tired
Converting a conditional to a converse:
* switch the hypothesis and the conclusion
Example:
* conditional- if I walk all day, then I am tired
* converse- if I am tired, then I walk all day
Converting a conditional to a contrapositive (its logical equivalent) follows this rule:
* if not conclusion, then not hypothesis
Example:
* conditional- if I walk all day, then I am tired
* contrapositive- if I am not tired, then I did not walk all day
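These four forms can be checked mechanically with a truth table. A minimal sketch in Python (my own illustration); it confirms that a conditional always has the same truth value as its contrapositive, while the inverse and converse match each other instead:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

print("p, q, conditional, inverse, converse, contrapositive")
for p, q in product([True, False], repeat=2):
    conditional    = implies(p, q)
    inverse        = implies(not p, not q)
    converse       = implies(q, p)
    contrapositive = implies(not q, not p)
    print(p, q, conditional, inverse, converse, contrapositive)
    assert conditional == contrapositive  # logical equivalents
    assert inverse == converse            # also equivalent to each other
```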
## Saturday, March 3, 2012
### what is a mathematical statement?
A mathematical statement is a statement that can be proven true or false.
This is probably an everyday thing.
An example of a mathematical statement would be:
The principal of CPEHS is Mr. Lieberman and a teacher at CPEHS is Mr. Schnatterly.
** THE WORD "AND" MEANS BOTH PARTS MUST BE TRUE FOR THE STATEMENT TO BE TRUE.
There are 4 different statements that can be formed ;
- the conditional
- inverse
- converse
- contrapositive or logical equivalent
The conditional;
If I use a pink pen, then I am lucky.
The inverse;
If I am not using a pink pen, then I am not lucky.
The converse ;
If I am lucky, then I am using a pink pen.
The contrapositive;
If I am not lucky, then I am not using a pink pen.
## Monday, February 20, 2012
### How do we graph Rotations?
1. Know the angle of rotation.
2. Know the direction (either clockwise or counterclockwise).
3. Apply the formula for the given angle to each point (angles measured counterclockwise):
90 degree rotation
(x,y) → (-y,x)
-90 degree rotation (the same as 270 degrees)
(x,y) → (y,-x)
180 degree rotation
(x,y) → (-x,-y)
270 degree rotation
(x,y) → (y,-x)
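A minimal sketch of these rules in Python (angles are counterclockwise about the origin, matching the formulas above):

```python
def rotate90(point: tuple[float, float], times: int = 1) -> tuple[float, float]:
    """Rotate a point counterclockwise about the origin by 90° * times.

    times=1 -> 90°, times=2 -> 180°, times=3 -> 270°, times=-1 -> -90°.
    """
    x, y = point
    for _ in range(times % 4):   # -90° is the same as 270° (times % 4 == 3)
        x, y = -y, x             # one 90° counterclockwise turn
    return (x, y)

print(rotate90((2, 3), 1))   # 90°:  (-3, 2)
print(rotate90((2, 3), 2))   # 180°: (-2, -3)
print(rotate90((2, 3), 3))   # 270°: (3, -2)
print(rotate90((2, 3), -1))  # -90°: (3, -2), same as 270°
```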
### How do we use the other definitions of transformations?
Glide reflection- a reflection of a figure in a line combined with a translation along that line.
Orientation- the arrangement of points.
Isometry- when the LENGTH and the SIZE of the image stay the same after the transformation of the original shape.
Direct Isometry- the orientation of the letter points stays the same, as do the lengths.
Opposite Isometry- the letter points of the image are in reverse order from the original, but the lengths are the same, just like a reflection.
## Saturday, February 11, 2012
### How do we graph dilations?
- Dilation is one of the four transformations; it stretches or shrinks an image relative to its original size by its scale factor.
* The description of a dilation usually includes the scale factor (or ratio).
* Multiply the dimensions of the original by the scale factor to get the dimensions of the dilated image.
## Monday, February 6, 2012
### How do we identify transformations ?
A transformation is when you move a geometric figure. The types include translation, rotation, reflection, and dilation.
• Translation- every point is moved the same distance in the same direction.
• Reflection- the figure is flipped over a line of symmetry.
• Rotation- the figure is turned around one point.
• Dilation- an enlargement or reduction in size of the image.