# Trapped in the gender stereotype? The image of science among secondary school students and teachers
Makarova, Elena and Herzog, Walter. (2015) Trapped in the gender stereotype? The image of science among secondary school students and teachers. Equality, Diversity and Inclusion: An International Journal, 34 (2). pp. 106-123.
Full text not available from this repository.
Official URL: https://edoc.unibas.ch/76288/
# How to Visualize a Decision Tree from a Random Forest in Python using Scikit-Learn
Here’s the complete code: just copy and paste into a Jupyter Notebook or Python script, replace with your data and run:
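The author’s original snippet is not reproduced here, so the following is a minimal sketch of the same workflow, using the iris dataset as placeholder data and placeholder file names; it follows the four steps explained below.

```python
# Minimal sketch (not the original article's snippet): plot one tree from a
# random forest with scikit-learn, graphviz's dot utility, and IPython.
from subprocess import call
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_graphviz
from IPython.display import Image

# 1. Create and train a model, then extract a single tree from the forest
iris = load_iris()
model = RandomForestClassifier(n_estimators=10, max_depth=3)
model.fit(iris.data, iris.target)
estimator = model.estimators_[5]          # any individual tree will do

# 2. Export the tree as a .dot file
export_graphviz(estimator, out_file='tree.dot',
                feature_names=iris.feature_names,
                class_names=iris.target_names,
                rounded=True, proportion=False, precision=2, filled=True)

# 3. Convert the .dot file to .png with a system command (requires graphviz)
call(['dot', '-Tpng', 'tree.dot', '-o', 'tree.png', '-Gdpi=600'])

# 4. Visualize in a Jupyter Notebook
Image(filename='tree.png')
```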
The final result is a complete decision tree as an image.
### Explanation of code
1. Create, train, and extract a model: we could use a single decision tree, but since I often employ a random forest for modeling, it’s used in this example. (The individual trees will be slightly different from one another!)
2. Export Tree as .dot File: This makes use of the export_graphviz function in Scikit-Learn. There are many parameters here that control the look and information displayed. Take a look at the documentation for specifics.
3. Convert dot to png using a system command: running system commands in Python can be handy for carrying out simple tasks. This requires installation of graphviz which includes the dot utility. For the complete options for conversion, take a look at the documentation.
4. Visualize: the best visualizations appear in the Jupyter Notebook. (Equivalently you can use matplotlib to show images).
Numerical Analysis of Cold Storage System with Array of Solid-Liquid Phase Change Module
Title & Authors
Mun, Soo-Beom;
Abstract
This paper is a fundamental study on the application of cold storage systems to sea and land transportation equipment. This numerical study presents the solid-liquid phase change phenomenon of a 30 wt% calcium chloride solution. The governing equations are one-dimensional unsteady-state heat transfer equations in the form of first-order partial differential equations. This type of latent heat storage material is often used in fishery vessels to keep the container temperature constant. The governing equation was discretized with the finite difference method and the program was written in Mathcad. The main parameters of this solution were the initial temperature of the heat storage material, the ambient temperature of the cold air, and the cold air velocity. The results show that the boundary layer becomes thinner as the cold air velocity increases, and the heat storage completion time is also shortened.
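For illustration only (this is not the author's Mathcad program), the core of such a one-dimensional explicit finite-difference calculation might look like the sketch below; the property values, grid, and cold-air parameters are placeholders, and the latent-heat (phase change) term is omitted:

```python
# Illustrative sketch only: explicit finite-difference solution of 1-D unsteady
# heat conduction with a convective (cold air) boundary. All property values are
# placeholders and the latent heat of the phase change material is not modelled.
import numpy as np

alpha = 1.0e-7     # thermal diffusivity of the storage material [m^2/s] (placeholder)
h_conv = 25.0      # convective coefficient of the cold air [W/m^2K] (placeholder)
k_cond = 0.6       # thermal conductivity [W/mK] (placeholder)
L, N = 0.05, 51    # slab thickness [m] and number of nodes
dx = L / (N - 1)
dt = 0.4 * dx**2 / alpha          # keeps the explicit scheme stable (Fo < 0.5)
T = np.full(N, 20.0)              # initial temperature of the storage material [C]
T_air = -25.0                     # cold air temperature [C]

for step in range(20000):
    T_new = T.copy()
    # interior nodes: forward in time, central difference in space
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # convective boundary at x = 0 (cold air side); insulated at x = L
    T_new[0] = T[0] + 2 * alpha * dt / dx**2 * (
        T[1] - T[0] + h_conv * dx / k_cond * (T_air - T[0]))
    T_new[-1] = T_new[-2]
    T = T_new

print("Surface / centre temperature after %.0f s: %.1f C / %.1f C"
      % (20000 * dt, T[0], T[N // 2]))
```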
Keywords
Numerical analysis;Cold storage system;Solid-liquid phase change;Finite difference method;1-dimensional unsteady state equation;
Language
Korean
Bug 1730377 - fix sss_cache to also reset cached timestamp
Keywords: Status: Triaged CLOSED WONTFIX None Red Hat Enterprise Linux 7 Red Hat sssd --- 7.6 x86_64 Linux medium medium rc Target Release: --- Sumit Bose sssd-qe sync-to-jira depends on / blocked
Reported: 2019-07-16 14:45 UTC by Paul Raines 2021-01-15 15:37 UTC (History) 14 users (show) ademir.ladeira aheverle atikhono dlavu grajaiya jhrozek lslebodn millard.matt mzidek pbrezina pkettman rpage thalman tscherf If docs needed, set a value 2021-01-15 11:46:22 UTC
System ID Private Priority Status Summary Last Updated
Github SSSD sssd issues 4872 0 None open Silent cache corruption and entries not refreshing 2021-02-08 11:34:27 UTC
Paul Raines 2019-07-16 14:45:36 UTC

Description of problem: Changes to the LDAP server Group database will not propagate to some sssd clients using that LDAP server. Even running sss_cache -E will not fix it. Only shutting down sssd, removing the cache_default.ldb and timestamps_default.ldb files from /var/lib/sss/db, and restarting sssd works.

Version-Release number of selected component (if applicable): sssd-1.16.2-13.el7_6.8.x86_64

How reproducible: Very random

Steps to Reproduce:
1. Make a change to a group entry in LDAP
2. Run 'sss_cache -E' on clients
3. Check with 'getent group' on clients to see if correct

Actual results: Group entry did not change to match LDAP server

Expected results: Group entry should change to match LDAP server

Additional info: Upstream issue at https://pagure.io/SSSD/sssd/issue/3886

This is a screen capture showing the issue:

[root@hound db]# getent group stroke
stroke:*:1021:judith
[root@hound db]# grep ldap4 /etc/sssd/sssd.conf
ldap_uri = ldap://ldap4.mydomain.org, ldap://ldap5.mydomain.org
[root@hound db]# ldapsearch -h ldap4 -x -b 'ou=Group,dc=mydomain,dc=org' "(cn=stroke)" | grep memberUid
memberUid: judith
memberUid: marco
memberUid: bgh12
[root@hound db]# sss_cache -G
[root@hound db]# sss_cache -E
[root@hound db]# getent group stroke
stroke:*:1021:judith
[root@hound db]# systemctl stop sssd
[root@hound db]# \rm cache_default.ldb timestamps_default.ldb
[root@hound db]# systemctl start sssd
[root@hound db]# getent group stroke
stroke:*:1021:judith,marco,bgh12

Sumit Bose 2020-11-24 16:19:44 UTC

Hi, I tried to reproduce the issue as it was described in the upstream tickets https://pagure.io/SSSD/sssd/issue/3886 and https://pagure.io/SSSD/sssd/issue/3869 but was not successful. Then I checked the logs from the upstream tickets again and would say that there might have been an issue on the server side which prevented the timestamp-cache logic from updating the data cache. The timestamps in the 'Adding original mod-Timestamp' debug messages of the groups in question are typically weeks older than the timestamps of the log entries. So my current best guess is that the timestamp on the server side was not updated for whatever reason (I found some bug reports about such an issue) and as a result SSSD thinks that there is no change and no update is needed. Some logs in the upstream tickets and from the attached cases show an issue with missing timestamp cache entries (https://github.com/SSSD/sssd/issues/5121) which was recently fixed by Tomas. I was not able to reproduce the observed behavior by selectively removing timestamp entries of the objects involved. About the attached cases in general, the main issue in the cases was a different one and looks resolved. I doubt that any of the cases really has the issue reported in the upstream tickets. As a result, I was not able to find an issue in SSSD with the data available. However, given that there might be cases where the server-side timestamp might be out of sync, it might be worth thinking about resetting the cached timestamp with sss_cache as well, so that the object must really be read from the server and writing to the cache cannot be skipped. If we decide that this is a good idea we have to decide as well if this is something we want to have in RHEL-7. bye, Sumit

Raymond Page 2021-01-15 15:27:48 UTC

Resolution: --- → WONTFIX

^^ This type of resolution without customer interaction will directly inform my recommendations to leadership regarding RH solutions. Specifically, the inability to reproduce is not evidence contrary to the existence of an issue; it is evidence the technical lead is incapable of reproducing the issue. If support personnel are not capable of reproducing customer issues, then the support agreements become worthless and call into question the technical value of RH solutions.

Alexey Tikhonov 2021-01-15 15:37:48 UTC

(In reply to Raymond Page from comment #9)
> Resolution: --- → WONTFIX
> ^^ This type of resolution without customer interaction will directly inform
> my recommendations to leadership regarding RH solutions.
> Specifically, the inability to reproduce is not evidence contrary to the
> existence of an issue, it is evidence the technical lead is incapable of
> reproducing the issue.
> If support personnel are not capable of reproducing customer issues, then
> the support agreements become worthless and calls into question the
> technical value of RH solutions.

Please read the explanation in comment 4 about the defined scope of the issue. Taking into account the status of RHEL7, this scope can't be addressed here and the issue will be tracked in RHEL8 bz 1902280. Sorry for not making this comment public initially. If there are any additional details available that are missing on the engineering side, please work with your support contacts directly.
Find the value of x.
As one of the key management functions, leading focuses on a manager's efforts to: a. communicate with employees b. motivate the workforce c. guide employees' efforts d. stimulate high performance e. all of the above
The reaction between between common salt and concentrated tetraoxosulphate(vi) acid will liberate A. sulphur (iv) oxide B. oxygen and chloride C. Hydrogen chloride gas D. Hydrogen sulphide gas
Mon petit frère a peur (fear) du pluie nuages tonnerre soleil
I need help with geometry.
Zhz-kjhw-sdy only study
Hhhhhhheeeeeeeelllllp
If the square root of p2 is an integer greater than 1, which of the following must be true? I. p2 has an odd number of positive factors II. p2 can be expressed as the product of an even number of positive prime factors III. p has an even number of positive factors?
What is the slope-intercept form of 4x+5y=10
*PLEASE ANSWER!! I DONT GET IT* George gets the opportunity to make overtime pay at the end of every month. What is overtime? a.) A rate of pay for hours worked exceeding 40 hours b.) A rate of pay for working under 40 hours. c.) Deductions
What is 16 out of 24 as a grade
IF ANYONE HELPS AND GETS IT RIGHT, WILL MARK BRAINLIST!!!!! 1.) President Obama's concern in his speech regarding Libya was: A.) maintaining a relationship with the Libyan government. B.) preserving Libyan democracy. C.) protecting basic human rights. D.) overthrowing the Libyan government. 2.) A presidential inaugural address takes place: A.) at the end of the president's term. B.) at the beginning of the president's term. C.) once a year. D.) the night a new president wins the election.
What is the study of acid-base chemistry called in the environment
As part of the war on terror, what action did the United States take in Afghanistan? It captured Osama bin Laden. It jailed all members of al-Qaeda. It told Americans to leave Afghanistan. It drove the Taliban out of powe
10. Solve the system using substitution. (2 points) 3x - y = -1 x - y = -3 What is the solution for y? -4 - 1 1 4
My heart: I not going anywhere but here because you are my dreams you are my everything, I can imagine waking with you and me through my great times and my tough times They are all with you but all my worries and all my hopes are with you want it you got it You like my old jeans but I am just thinking of all the things we have done Just you and me let our dreams come true, just let us fly away with you.
What pollution comes from many small sources
There are about 25.4 millimeters in one inch. Write this number in scientific notation.
StatisticItem
Represents a single statistic item (as found in Gathered Information)
Object Properties:
• type (string) – set to “StatisticItem”
• name (string) – The name of this item
• value (string) – The value of this item
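For illustration, a single item of this kind might look as follows (the name and value here are hypothetical, and the dictionary rendering is just one way such an object could be serialized):

```python
# Hypothetical StatisticItem instance (illustrative names/values only)
item = {
    "type": "StatisticItem",      # always set to "StatisticItem"
    "name": "example_statistic",  # hypothetical name of the item
    "value": "42",                # values are carried as strings
}
```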
## Project description
Calf lets you remove all your command argument parsing code, at least for simple cases. Only the implementation function is left, with initialization code that uses calf to call this function. The command argument parser is configured with a proper docstring, and perhaps some annotations (argument types) and default values for the parameters. In other words, stuff that you would write anyway.
The docstring can be written in Google, Sphinx, epydoc or Numpy style, and the design is that it is easy to swap the parsing function with yours. In fact, you can customize such a wide range of characteristics of calf, that you can treat it as a slightly restricted frontend to the ArgumentParser under the hood. Used in this way, you can treat calf as a cute way to configure argparse.
This package shamelessly stole a lot of ideas from plac, but hopes to be more focused on creating comfortable command line interfaces rather than becoming a Swiss Army knife for programs with a text-only user interface.
## Basic example
Hello-world looks like this:
def hello(name) -> None:
    """Say hello

    Args:
        name: name to say hello to
    """
    print('Hello,', name)

if __name__ == '__main__':
    import calf
    calf.call(hello)
The first thing to notice is that the program uses Google docstring style. If you want to use another style, just add doc_parser=<parser> to calf.call. Here <parser> may be calf.google_doc_parser, calf.sphinx_doc_parser (for Sphinx or Epydoc) or calf.numpy_doc_parser. You can run this program with:
hello.py Isaac
Here name is a positional command line argument: a normal function argument always maps to a positional command line argument. If you want an option instead, you can replace the function argument like this:
def hello(*, name: str = 'Isaac') -> None:
    """Say hello

    Args:
        name: (-n) name to say hello to
    """
    print('Hello,', name)
Then the program is run like one of the following:
hello.py
hello.py --name Cathy
hello.py -n Cathy
Now name is an option: a keyword-only function argument always maps to an option. In this version we are explicit about the type of the argument. Note also that the leading -n in the docstring describing the argument, enclosed in parentheses, becomes the short option name.
It is usually a good idea to allow options not to be specified, by providing a default value. Positional arguments can also be provided a default value, but it doesn't mix well with variable arguments described below.
It is also possible to specify a default which provides no value (so the program knows that no value is provided). This is done either by using a default value of None, or by giving the parameter a type of a parameterized typing.Optional (without setting a default). In this case the normal construction of the target type will not happen.
There is a special case: any boolean function argument becomes a default-off flag. I cannot find a natural way to have a default-on flag, so it is not provided. (Let me know if you think otherwise!)
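For instance, a minimal sketch of such a flag (the function and flag names below are made up for illustration, following the keyword-only pattern shown above):

```python
def hello(*, shout: bool = False) -> None:
    """Say hello

    Args:
        shout: (-s) print the greeting in upper case
    """
    msg = 'Hello, world'
    print(msg.upper() if shout else msg)

if __name__ == '__main__':
    import calf
    calf.call(hello)
```

Running hello.py --shout (or hello.py -s) would then turn the flag on, while a bare hello.py leaves it off.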
Variable arguments and keyword arguments can also be used. Variable arguments will become a list of the specified type:
def do_sum(*arg: int) -> None:
    """Sum numbers"""
    print('Sum =', sum(arg, 0))
Here the argument type is "int". The string passed in the command line argument will be converted to this type, and in the help message there will be a little hint (looking like "[int]") indicating the needed type. Also note that in this example I don't add documentation for the arguments: the docstring information is optional, without them there is no help string but everything else still works.
Keyword arguments cause command line arguments like "<name>=<value>" to be stolen from the var-arg and form a map. A type can still be provided. For example, if you have:
import urllib.parse

def get_query_str(url, **item) -> None:
    "Create URL with parameters"
    qstr = urllib.parse.urlencode(item)
    if qstr:
        url += '?' + qstr
    print(url)
Then you can run something like
get_query_str.py http://a/b x=a=c y=/
to get http://a/b?y=%2F&x=a%3Dc.
Finally, if you're tired of writing initialization code, you have an additional option to directly place your module under your PYTHONPATH. Then you can run your program simply like
calf hello.hello -n Isaac
You can have your function accept other types. Calf normally uses one positional argument or option for each function argument, and whatever string you specify in the argument will be passed to the type you specified (via the default value or annotation) as a constructor argument. In cases where passing the string to the type constructor doesn't do the right thing (e.g., datetime), you can create your own conversion function and add it to calf.CONVERTERS. This has been done for datetime.date, datetime.time and datetime.datetime, and you can change how they behave by modifying calf.CONVERTERS (see the nextday.py example in the docs directory).
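As a rough sketch of that idea (assuming, as the text suggests, that calf.CONVERTERS maps a type to a function turning the command-line string into a value of that type; check the package docs for the exact structure):

```python
import datetime
import calf

# Assumption: CONVERTERS is a type -> converter mapping; here we (hypothetically)
# make dates parse as DD/MM/YYYY instead of the default behaviour.
calf.CONVERTERS[datetime.date] = (
    lambda s: datetime.datetime.strptime(s, '%d/%m/%Y').date())

def days_until(when: datetime.date) -> None:
    """Print the number of days from today until WHEN"""
    print((when - datetime.date.today()).days)

if __name__ == '__main__':
    calf.call(days_until)
```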
But you can also extend calf by creating a subclass of "selector" which selects function arguments based on name and type. It then specifies how to create a "loader" to handle the function argument, which may use multiple command line arguments (or do any other interaction with the ArgumentParser). See composite.py in the docs directory to see how this is done, for the common case.
Other parts of the module can also be overridden. For example, you can change the docstring parser and parameter doc parser. See the design document in the docs directory to understand the design and do all sorts of things with calf.
# Savonius wind turbine
Savonius wind turbines are a type of vertical-axis wind turbine (VAWT), used for converting the force of the wind into torque on a rotating shaft. The turbine consists of a number of aerofoils, usually—but not always—vertically mounted on a rotating shaft or framework, either ground stationed or tethered in airborne systems.
## Origin
The Savonius wind turbine was invented by the Finnish engineer Sigurd Johannes Savonius in 1922. However, Europeans had been experimenting with curved blades on vertical wind turbines for many decades before this. The earliest mention is by the Italian Bishop of Czanad, Fausto Veranzio, who was also an engineer. He wrote in his 1616 book Machinae novae about several vertical axis wind turbines with curved or V-shaped blades. None of his or any other earlier examples reached the state of development made by Savonius. In his Finnish biography there is mention of his intention to develop a turbine-type similar to the Flettner-type, but autorotationary. He experimented with his rotor on small rowing vessels on lakes in his country. No results of his particular investigations are known, but the Magnus-Effect is confirmed by König.[1] The two Savonius patents: US1697574, filed in 1925 by Sigurd Johannes Savonius, and US1766765, filed in 1928.
## Operation
Schematic drawing of a two-scoop Savonius turbine
The Savonius turbine is one of the simplest turbines. Aerodynamically, it is a drag-type device, consisting of two or three scoops. Looking down on the rotor from above, a two-scoop machine would look like an "S" shape in cross section. Because of the curvature, the scoops experience less drag when moving against the wind than when moving with the wind. The differential drag causes the Savonius turbine to spin. Because they are drag-type devices, Savonius turbines extract much less of the wind's power than other similarly-sized lift-type turbines. Much of the swept area of a Savonius rotor may be near the ground, if it has a small mount without an extended post, making the overall energy extraction less effective due to the lower wind speeds found at lower heights.
## Power and rotational speed
The maximum power of a Savonius rotor is given by $P_{\mathrm{max}} = 0.36\,\mathrm{kg\,m^{-3}} \cdot h \cdot r \cdot v^{3}$, where $h$ and $r$ are the height and radius of the rotor and $v$ is the wind speed.[citation needed]
The angular frequency of a rotor is given by $\omega = \frac{\lambda \cdot v}{r}$, where $\lambda$ is a dimensionless factor called the tip-speed ratio. The range within which $\lambda$ varies is characteristic of a specific windmill, and for a Savonius rotor $\lambda$ is typically around 1.
For example, an oil-barrel sized Savonius rotor with h=1 m and r=0.5 m under a wind of v=10 m/s, will generate a maximum power of 180 W and an angular speed of 20 rad/s (190 revolutions per minute).
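A quick numerical check of these two formulas, using the oil-barrel example above (the 0.36 coefficient and λ ≈ 1 come from the text):

```python
import math

# Savonius rotor example from the text: h = 1 m, r = 0.5 m, v = 10 m/s
h, r, v = 1.0, 0.5, 10.0
lam = 1.0                       # typical tip-speed ratio for a Savonius rotor

P_max = 0.36 * h * r * v**3     # maximum power [W]; 0.36 carries units of kg/m^3
omega = lam * v / r             # angular frequency [rad/s]
rpm = omega * 60 / (2 * math.pi)

print(f"P_max = {P_max:.0f} W")       # -> 180 W
print(f"omega = {omega:.0f} rad/s")   # -> 20 rad/s
print(f"speed = {rpm:.0f} rpm")       # -> about 190 rpm
```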
## Use
Combined Darrieus-Savonius generator in Taiwan
Savonius turbines are used whenever cost or reliability is much more important than efficiency.
Most anemometers are Savonius turbines for this reason, as efficiency is irrelevant to the application of measuring wind speed. Much larger Savonius turbines have been used to generate electric power on deep-water buoys, which need small amounts of power and get very little maintenance. Design is simplified because, unlike with horizontal axis wind turbines (HAWTs), no pointing mechanism is required to allow for shifting wind direction and the turbine is self-starting. Savonius and other vertical-axis machines are good at pumping water and other high torque, low rpm applications and are not usually connected to electric power grids. In the early 1980s Risto Joutsiniemi developed a helical rotor version that does not require end plates, has a smoother torque profile and is self-starting in the same way a crossed pair of straight rotors is.
The most ubiquitous application of the Savonius wind turbine is the Flettner Ventilator, which is commonly seen on the roofs of vans and buses and is used as a cooling device. The ventilator was developed by the German aircraft engineer Anton Flettner in the 1920s. It uses the Savonius wind turbine to drive an extractor fan. The vents are still manufactured in the UK by Flettner Ventilator Limited.[2]
Small Savonius wind turbines are sometimes seen used as advertising signs where the rotation helps to draw attention to the item advertised. They sometimes feature a simple two-frame animation.
## Tethered airborne Savonius turbines
• Airborne wind turbines
• Kite types
• When the Savonius rotor axis is set horizontally and tethered, then kiting results. There are scores of patents and products that use the net lift Magnus-effect that occurs in the autorotation of the Savonius rotor. The spin may be mined for some of its energy for making noise, heat, or electricity.
## References
1. ^ Felix van König (1978). Windenergie in praktischer Nutzung. Pfriemer. ISBN 3-7906-0077-6.
2. ^ Flettner
F. Making It Bipartite
time limit per test: 4 seconds
memory limit per test: 512 megabytes
input: standard input
output: standard output
You are given an undirected graph of $n$ vertices indexed from $1$ to $n$, where vertex $i$ has a value $a_i$ assigned to it and all values $a_i$ are different. There is an edge between two vertices $u$ and $v$ if either $a_u$ divides $a_v$ or $a_v$ divides $a_u$.
Find the minimum number of vertices to remove such that the remaining graph is bipartite, when you remove a vertex you remove all the edges incident to it.
Input
The input consists of multiple test cases. The first line contains a single integer $t$ ($1 \le t \le 10^4$) — the number of test cases. Description of the test cases follows.
The first line of each test case contains a single integer $n$ ($1 \le n \le 5\cdot10^4$) — the number of vertices in the graph.
The second line of each test case contains $n$ integers, the $i$-th of them is the value $a_i$ ($1 \le a_i \le 5\cdot10^4$) assigned to the $i$-th vertex, all values $a_i$ are different.
It is guaranteed that the sum of $n$ over all test cases does not exceed $5\cdot10^4$.
Output
For each test case print a single integer — the minimum number of vertices to remove such that the remaining graph is bipartite.
Example
Input
4
4
8 4 2 1
4
30 2 3 5
5
12 4 6 2 3
10
85 195 5 39 3 13 266 154 14 2
Output
2
0
1
2
Note
In the first test case, if we remove the vertices with values $1$ and $2$ we will obtain a bipartite graph, so the answer is $2$; it is impossible to remove fewer than $2$ vertices and still obtain a bipartite graph.
In the second test case we do not have to remove any vertex because the graph is already bipartite, so the answer is $0$.
In the third test case we only have to remove the vertex with value $12$, so the answer is $1$.
In the fourth test case we remove the vertices with values $2$ and $195$, so the answer is $2$.
I've raised this topic a couple of times here. Several years ago, Groves and Heeringa (2006) proposed an approach to survey data collection that they called "Responsive Design." The design was rolled out in phases with information from prior phases being used to tailor the design in later phases.
In my dissertation, I wrote about "Adaptive Survey Design." For me, the main point of using the term "adaptive" was to link to the research on adaptive treatment regimes, especially as proposed by Susan Murphy and her colleagues.
I hadn't thought much about the relationship between the two. At the time, I saw what I was doing as a subset of responsive designs.
Since then, Barry Schouten and Melania Calinescu at Statistics Netherlands have defined "adaptive static" and "adaptive dynamic" designs. Adaptive static designs tailor the protocol to information on the sampling frame. For example, determining the mode of contact for each case by its characteristics on the frame, like age. Adaptive dynamic designs tailor the design to incoming paradata. A refusal conversion protocol might be a commonly used example. Changing incentives based on paradata might be another example. The "adaptive dynamic" designs seem to come closest to the kind of designs I envisioned when writing my dissertation.
Over the summer, Mick Couper and I gave a talk on responsive designs. We included some definitional discussion. It was Mick's idea to describe these designs along a continuum. The dimension of the continuum involves how much tailoring there is. On one end, single protocol surveys apply the same protocol to every case. On the other end of the spectrum, adaptive treatment regimes provide individually-tailored protocols. Here's a graphic:
The definitions of these various terms may still be fluid. The important thing is that folks who are working on similar things be able to communicate and build upon each others results.
### "Responsive Design" and "Adaptive Design"
My dissertation was entitled "Adaptive Survey Design to Reduce Nonresponse Bias." I had been working for several years on "responsive designs" before that. As I was preparing my dissertation, I really saw "adaptive" design as a subset of responsive design.
Since then, I've seen both terms used in different places. As both terms are relatively new, there is likely to be confusion about the meanings. I thought I might offer my understanding of the terms, for what it's worth.
The term "responsive design" was developed by Groves and Heeringa (2006). They coined the term, so I think their definition is the one that should be used. They defined "responsive design" in the following way:
1. Preidentify a set of design features that affect cost and error tradeoffs.
2. Identify indicators for these costs and errors. Monitor these during data collection.
3. Alter the design features based on pre-identified decision rules based on the indi…
### Tailoring vs. Targeting
One of the chapters in a recent book on surveying hard-to-reach populations looks at "targeting and tailoring" survey designs. The chapter references this paper on the use of the terms among those who design health communication. I thought the article was an interesting one. They start by saying that "one way to classify message strategies like tailoring is by the level of specificity with which characteristics of the target audience are reflected in the communication."
That made sense. There is likely a continuum of specificity ranging from complete non-differentiation across units to nearly individualized. But then the authors break that continuum and try to define a "fundamental" difference between tailoring and targeting. They say targeting is for some subgroup while tailoring is to the characteristics of the individual. That sounds good, but at least for surveys, I'm not sure the distinction holds.
In survey design, what would constitute tail…
### An Experimental Adaptive Contact Strategy
I'm running an experiment on contact methods in a telephone survey. I'm going to present the results of the experiment at the FCSM conference in November. Here's the basic idea.
Multi-level models are fit daily with the household being a grouping factor. The models provide household-specific estimates of the probability of contact for each of four call windows. The predictor variables in this model are the geographic context variables available for an RDD sample.
Let $\mathbf{X_{ij}}$ denote a $k_j \times 1$ vector of demographic variables for the $i^{th}$ person and $j^{th}$ call. The data records are calls. There may be zero, one, or multiple calls to a household in each window. The outcome variable is an indicator for whether contact was achieved on the call. This contact indicator is denoted $R_{ijl}$ for the $i^{th}$ person on the $j^{th}$ call to the $l^{th}$ window. Then for each of the four call windows denoted $l$, a separate model is fit where each household is assum…
## Solve quadratic equations using the formula
Deriving and Using the Quadratic Formula
The profit on your school fundraiser is represented by the quadratic expression , where p is your price point. What is your break-even point (i.e., the price point at which you will begin to make a profit)? Hint: Set the equation equal to zero.
The last way to solve a quadratic equation is the Quadratic Formula. This formula is derived from completing the square for the equation $ax^2+bx+c=0$ (see #13 from the Problem Set in the previous concept). We will derive the formula here.
#### Investigation: Deriving the Quadratic Formula
Walk through each step of completing the square of $ax^2+bx+c=0$.
1. Move the constant to the right side of the equation.
2. “Take out” $a$ from everything on the left side of the equation.
3. Complete the square using $\left(\frac{b}{2a}\right)^2$.
4. Add this number to both sides. Don’t forget on the right side, you need to multiply it by $a$ (to account for the $a$ outside the parentheses).
5. Factor the quadratic equation inside the parentheses and give the right-hand side a common denominator.
6. Divide both sides by $a$.
7. Take the square root of both sides.
8. Subtract $\frac{b}{2a}$ from both sides to get $x$ by itself.
This formula will enable you to solve any quadratic equation as long as you know $a$, $b$, and $c$ (from $ax^2+bx+c=0$).
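As a small programmatic illustration of the formula (a generic sketch, not part of the original lesson), the following returns both roots and handles negative discriminants by producing imaginary answers:

```python
import cmath

def solve_quadratic(a, b, c):
    """Return both roots of ax^2 + bx + c = 0 using the quadratic formula."""
    disc = cmath.sqrt(b**2 - 4*a*c)   # complex sqrt handles negative discriminants
    return (-b + disc) / (2*a), (-b - disc) / (2*a)

# Example: x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3), so the roots are 3 and 2
print(solve_quadratic(1, -5, 6))      # -> ((3+0j), (2+0j))
```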
#### Solve the following problems using Quadratic Formula
First, make sure one side of the equation is zero. Then, find and . . Now, plug in the values into the formula and solve for .
Let’s get everything onto the left side of the equation.
Now, use and and plug them into the Quadratic Formula.
Solve by factoring, completing the square, and the Quadratic Formula.
While it might not look like it, 51 is not a prime number. Its factors are 17 and 3, which add up to 20.
Now, solve by completing the square.
Lastly, let’s use the Quadratic Formula. .
Notice that no matter how you solve this, or any, quadratic equation, the answer will always be the same.
### Examples
#### Example 1
The break-even point is the point at which the equation equals zero. So use the Quadratic Formula to solve for p.
Now, use and and plug them into the Quadratic Formula.
Therefore, there are two break-even points: .
and
#### Example 3
Solve using all three methods.
Factoring: . The factors of -30 that add up to -1 are -6 and 5. Expand the term.
Complete the square
### Review
Solve the following equations using the Quadratic Formula.
Choose any method to solve the equations below.
Solve the following equations using all three methods.
1. Writing Explain when you would use the different methods to solve different types of equations. Would the type of answer (real or imaginary) help you decide which method to use? Which method do you think is the easiest?
### Vocabulary Language: English
Binomial
A binomial is an expression with two terms. The prefix 'bi' means 'two'.
Completing the Square
Completing the square is a common method for rewriting quadratics. It refers to making a perfect square trinomial by adding the square of 1/2 of the coefficient of the $x$ term.
Quadratic Formula
The quadratic formula states that for any quadratic equation in the form $ax^2+bx+c=0$, $x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}$.
Roots
The roots of a function are the values of x that make y equal to zero.
Square Root
The square root of a term is a value that must be multiplied by itself to equal the specified term. The square root of 9 is 3, since 3 * 3 = 9.
Vertex
The vertex of a parabola is the highest or lowest point on the graph of a parabola. The vertex is the maximum point of a parabola that opens downward and the minimum point of a parabola that opens upward.
# Microbusinesses and Occupational Stress: Emotional Demands, Job Resources, and Depression Among Korean Immigrant Microbusiness Owners in Toronto, Canada
## Article information
J Prev Med Public Health. 2019;52(5):299-307
Publication date (electronic) : 2019 August 16
doi : https://doi.org/10.3961/jpmph.19.134
1Department of Health Policy Research, Seoul Health Foundation, Seoul, Korea
2Social and Epidemiological Research, Centre for Addiction and Mental Health, Toronto, ON, Canada
3Department of Psychiatry, University of Toronto, Toronto, ON, Canada
4Western University Schulich School of Medicine and Dentistry, Toronto, ON, Canada
Corresponding author: Il-Ho Kim, PhD Department of Health Policy Research, Seoul Health Foundation, 41 Manrije-ro 24-gil, Yongsan-gu, Seoul 04303, Korea E-mail: kihsdh@hanmail.net
Received 2019 June 3; Accepted 2019 August 2.
## Abstract
### Objectives
While occupational stress has long been a central focus of psychological research, few studies have investigated how immigrant microbusiness owners (MBOs) respond to their unusually demanding occupation, or how their unresolved occupational stress manifests in psychological distress. Based on the job demands-resources model, this study compared MBOs to employees with regard to the relationships among emotional demands, job resources, and depressive symptoms.
### Methods
Data were derived from a cross-sectional survey of 1288 Korean immigrant workers (MBOs, professionals, office workers, and manual workers) aged 30 to 70, living in Toronto and surrounding areas. Face-to-face interviews were conducted between March 2013 and November 2013.
### Results
Among the four occupational groups, MBOs appeared to endure the greatest level of emotional demands, while reporting relatively lower levels of job satisfaction and job security; but MBOs reported the greatest job autonomy. The effect of emotional demands on depressive symptoms was greater for MBOs than for professionals. However, an inspection of stress-resource interactions indicated that though MBOs enjoyed the greatest autonomy, the protective effects of job satisfaction and security on the psychological risk of emotional demands appeared to be more pronounced for MBOs than for any of the employee groups.
### Conclusions
One in two Korean immigrants choose self-employment, most typically in family-owned microbusinesses that involve emotionally taxing dealings with clients and suppliers. However, the benefits of job satisfaction and security may protect MBOs from the adverse mental health effects of job stress.
## INTRODUCTION
The primary goal of this study is to augment the current research on job demands and resources as an important determinant of mental health by extending the discussion to immigrant microbusiness owners (MBOs). Specifically, this study examines psychological health variations related to emotional demands and 3 job resources (job autonomy, satisfaction, and security) by comparing immigrant MBOs and employees. Job demands are defined as physiological, psychological, and emotional aspects of a job that require workers’ skills and continuous effort [1]. In particular, occupational stress theory posits that emotional demands are significantly harmful to workers’ mental health, engendering conditions such as emotional exhaustion, burnout, and depression [2,3]. In fact, a large volume of empirical research has documented links between emotional demands and psychological problems among diverse employees, including managers, doctors, police officers, and home care workers [4-6]. In contrast to the growing knowledge on the harmful effects of emotional demands on employees, the evidence regarding immigrant MBOs is very limited. As frontline service workers, immigrant MBOs may be more vulnerable than immigrant employees to the health effects of emotional burdens [7]. They are mostly engaged in low-yield businesses with long work hours requiring a great deal of emotional regulation in daily interactions with customers [8,9]. Current research further suggests that the burden of emotional labour is not limited to the service sector [10].
Subsequent research has continued to support the job demands-resources (JD-R) model, according to which occupational resources such as job autonomy, job satisfaction, and job security may ultimately alleviate the psychological toll of occupational stress and improve workers’ well-being [5,11]. In other words, high emotional demands and low job resources can exacerbate stress-induced psychological strain [12]. Accumulating evidence also suggests that results may differ across occupations and individual workers according to the level of occupational resources and the psychological meaning of resources [13,14]. Despite these arguments, occupational stress research has rarely examined the emotional demand-health link based on job resources among immigrant MBOs. In addition, employees with high levels of job resources are more inclined to deal successfully with highly demanding work conditions; however, it is still unknown whether the JD-R model also applies to immigrant MBOs. Being a business owner and working as one’s own boss may generate psychological benefits related to having greater flexibility and autonomy than employees [15]. In the present study, we assessed job resources at the level of job autonomy, job satisfaction, and job security, since the typical measures of job resources such as social support from colleagues or supervisors are not applicable in a microbusiness setting [1]. This study is the first to investigate the interactive processes of emotional demands and occupational resources among immigrant MBOs in comparison to employees.
## METHODS
Data for this study were obtained from the 2013 Ontario Korean Business Occupational Stress Study (OKBOSS), a cross-sectional survey of first-generation Korean immigrant workers aged 30 to 70 living in the Greater Toronto Area and surrounding regions. The respondents were all foreign-born permanent residents or naturalized citizens of Canada who had been working in Canada for more than 2 years. After obtaining written consent from each of the total 1288 participants, face-to-face interviews were carried out from March 2013 to November 2013. Since Korean immigrant business owners in Ontario are a hard-to-reach population for research, quota sampling was applied as the most viable method of data collection. The Ontario Korean Businessmen’s Association directory was used to recruit MBOs. To increase representativeness, the MBO sample matched the employee sample in terms of age (30-70 years), sex, occupational groups, and regional distribution. Steps were taken to ensure that only 1 respondent per household and only 1 respondent per business was interviewed. Similarly, for employees, participation was restricted to only 1 worker per workplace for companies with fewer than 50 employees, and 2 workers per workplace for companies with 50 employees or more. The final sample included 550 MBOs, 258 professionals, 223 office workers (non-manual), and 257 skilled/unskilled (manual) workers. Approximately 50% of the participants were from the Greater Toronto Area, while the remaining 50% were from other regions of Southern Ontario, including the Hamilton-Niagara region and Owen Sound.
### Measurements
#### Depressive symptoms
The Center for Epidemiologic Studies Depression (CES-D) scale was utilized to evaluate depressive symptoms experienced by participants. The CES-D, originally a 20-item scale designed by the National Institute of Mental Health, measures 4 domains of depressive symptomatology: depressive mood, somatic symptoms, social withdrawal, and positive affect. However, several empirical studies revealed a cultural response bias for the positive affect items (i.e., happy, high self-esteem, hopeful, and joy in life) among certain Asian groups; therefore, for this study, the 4 positive items were eliminated. Each answer was rated on a 4-point Likert scale, ranging from 0 for rarely or none of the time (less than 1 day) to 3 for most or all of the time (5 days to 7 days a week). The overall scores ranged from 0 to 48, with a higher score indicating greater depressive symptoms. The 16-item CES-D scale had very good internal consistency, with a Cronbach’s alpha of 0.926.
#### Emotional demands
A 10-item emotional demand scale obtained from the previous literature was used to measure emotional demands [6,7]. The scale consisted of 4 aspects of emotional demands: the need to show positive or negative emotions (3 items), emotional sympathy (3 items), demand for sensitivity (3 items), and emotional suppression (2 items). Responses were selected on a 5-point Likert scale (never, rarely, sometimes, often, and always). We validated the unidimensional factor structure found in the entire sample and each occupational group, ensuring that the scale provided a short, valid measurement of emotional demands across the 4 occupational groups. The overall internal reliability was 0.883, while for the various occupational groups, internal reliability ranged from 0.825 in the MBO group to 0.908 in the non-manual group.
#### Occupational resources
Occupational resources were measured by the following 3 commonly used scales: job authority, job satisfaction, and job security. The job autonomy scale consisted of 4 items: task control, decision-making responsibility, decision freedom, and work schedule flexibility. Each of the resource items was rated on a 4-point scale, ranging from 0 for “strongly disagree” to 3 for “strongly agree.” The total summed scores of job autonomy ranged from 0 to 12. The internal reliability was 0.74.
Job satisfaction was measured by the Overall Job Satisfaction Scale of the Michigan Organizational Assessment Scale [16]. The 4 items measured fulfillment from work, willingness to choose the same job again, the job’s meaningfulness, and future prospects. This scale was found to be reliable across a variety of occupational groups (α=0.826). In our study, perceived job security was measured with 2 questions related to the past and present maintenance of steady employment (α=0.426).
#### Main independent variable and potential covariates
Participants were divided into 4 occupational groups based on the International Standard Classification of Occupations. The professional group (reference) consisted of occupations including managers, accountants, engineers, nurses, dentists, teachers, and professors. The non-manual group was composed of jobs such as accounting assistants, clerical workers, operators, paralegals, and technicians. The manual group (skilled/unskilled) included auto-shop mechanics, repairers, drivers, home care workers, security officers, and manual labourers. MBOs operated businesses such as convenience stores, dry cleaners, restaurants, flower shops, hair or nail salons, shoe repair shops, and clothing stores.
Potential covariates included demographic factors, socioeconomic status, and years since immigration. Chronological age (30-70 years) and years since immigration were analysed as continuous variables. Marital status was collapsed into 2 groups, with currently married as the reference group and all others as the risk category. Pre-migration education attainment was divided into “completed high school or less” and “completed more than high school (reference).” Post-migration education was classified into “no” and “yes (reference).” Household income earned from all sources was analysed in terms of 8 income categories ranging from under US$20 000 to US$200 000 or more. Participants were divided into 3 groups based on annual household income: under US$40 000, US$40 000 to US$79 999, and US$80 000 or more (reference).
Occupational difficulties were measured by 2 variables: language barriers and working hours. To assess language barriers, participants were asked how often problems with English caused them to experience difficulties at work in the following situations: (1) completing tasks, (2) negotiating or interacting with others, (3) dealing with official documents, and (4) getting fair treatment from others (landlords, sales clerks, etc.). The summed scores ranged from 0 to 15, with a higher score indicating greater language difficulties. The internal reliability was 0.899. Working hours were measured as a continuous variable based on the average number of hours worked per week at the respondent’s current job (20 hours or more).
### Statistical Analysis
We took a 2-step analytical approach. First, using the chi-square test and analysis of variance, descriptive statistics were obtained to estimate the diverse characteristics of MBOs and employees (professionals, and non-manual and manual workers). Second, through multiple regression analyses, we tested a number of hypotheses derived from the occupational stress model and JD-R model for immigrant MBOs and employees. This test allowed us to evaluate whether the interaction terms produced significant differences among the occupational groups. All analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA).
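For readers who want a concrete sense of the moderation tests, the sketch below shows the general form of the two-way and three-way interaction models; the variable names and the synthetic data are hypothetical, and the actual analyses were run in SAS, not Python:

```python
# Illustrative sketch only (hypothetical variable names; the paper used SAS 9.4).
# Shows the form of the two-way and three-way interaction (moderation) models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1288
df = pd.DataFrame({                      # synthetic stand-in for the survey data
    "cesd": rng.integers(0, 48, n),
    "emo_demands": rng.integers(10, 50, n),
    "job_security": rng.integers(0, 7, n),
    "occupation": rng.choice(["MBO", "professional", "non_manual", "manual"], n),
    "age": rng.integers(30, 71, n),
})

# Two-way interaction: does job security buffer the effect of emotional demands?
two_way = smf.ols("cesd ~ emo_demands * job_security + C(occupation) + age",
                  data=df).fit()

# Three-way interaction: does that buffering differ across occupational groups?
three_way = smf.ols("cesd ~ emo_demands * job_security * C(occupation) + age",
                    data=df).fit()

print(two_way.params.filter(like=":"))   # interaction coefficients
print(three_way.params.filter(like=":"))
```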
### Ethics Statement
This study was approved by the Office of Research Ethics at the Centre for Addiction and Mental Health (protocol #119).
## RESULTS
Table 1 shows the descriptive characteristics of the study sample of immigrant MBOs and employees. Each occupational group of immigrants made up approximately 20% of the total sample, except the MBO group (42.7%). The sample consisted of similar proportions of males (51.3%) and females (48.7%). Approximately 87.7% of participants were married, whereas only 12.3% were never married or previously married. More than three-quarters of MBOs had a college or higher degree in their pre-migration education, but only 13.1% of them had earned post-migration educational certificates. In contrast, 62.4% of professionals, 56.9% of non-manual workers and 22.1% of manual workers had earned post-migration educational certificates in Canada. Approximately 29.6% of MBOs and 41.6% of manual workers reported annual household earnings of under US$40 000, while only 8.5% of professionals and 18.4% of non-manual workers earned under US$40 000.
Socio-demographic characteristics according to occupational group
The mean age of participants was 49.7 years, and their mean length of residence in Canada was 16.3 years. Compared to other occupational groups, the MBO group was significantly older (53.6 years old) with longer terms of residence (17.2 years) in Canada.
MBOs worked the longest hours (58.8 hours per week), while other occupational groups worked an average of 40 hours per week. The overall mean score for emotional demands was 18.1, and MBOs were the most likely to face emotional demands (mean, 19.7). MBOs had the highest level of job autonomy and a modest level of job security, but a lower level of job satisfaction than other occupational groups. The mean depression score for Korean immigrant workers was 6.2, and MBOs and manual workers reported higher levels of depressive symptoms (mean, 6.4 and 7.4, respectively) than professionals and non-manual workers.
Table 2 presents the main effects of emotional demands and occupational variations in the link between emotional demands and depressive symptoms among Korean immigrant workers. As shown in model 1, being female, not married, and low-income were significantly associated with higher levels of depressive symptoms. While emotional demands were likely to increase the risk of depressive symptoms (model 2), MBOs were more likely than employees to experience an increase in depressive symptoms with an increase in emotional demands (β, 0.145; p<0.05; model 3). After language barriers and work hours were added in model 4, occupational variations in this link were no longer significant.
Associations between emotional demands and depression according to occupational group
As shown in Table 3 (model 1), the 2-way interaction test (emotional demands by job resources) revealed that all 3 job resources significantly buffered the negative health effects of emotional demands: job autonomy (β, -0.021; p<0.05), job satisfaction (β, -0.027; p<0.01), and job security (β, -0.070; p<0.001).
Associations among emotional demands, occupational resources, and depression according to occupational group
Model 2 shows the results of occupational variations in the buffering effect of job resources on the mental health risk of emotional demands. A 3-way interaction test (emotional demands by job resources by occupational status) showed that the buffering effects of job satisfaction and job security were more prominent among MBOs than other occupational groups (β, -0.099; p<0.001 for job satisfaction; β, -0.298; p<0.001 for job security; model 2) (Figure 1). However, there were no specific occupational variations in the buffering effect of job autonomy.
The moderating effect of job resources on the association between emotional demands and depressive symptoms: (A) job autonomy, (B) job satisfaction, and (C) job security. The buffering effects of 3 types of occupational resources on the relationship between emotional demands and depressive symptoms among professionals and MBOs. The other occupational groups such as non-manual and manual workers, which followed a pattern similar to that of the professional group, have been omitted. The low and high occupational resource effects are depicted with standard scores of 1 SD below (SD-1) and above (SD+1) the mean, respectively. MBOs, microbusiness owners; SD, standard deviation.
Figure 1 shows that the buffering effect of job security and job satisfaction on the health risk of emotional demands was more pronounced among MBOs than among professionals. However, the magnitude and direction of the buffering effect of job autonomy were higher among MBOs than among other occupational groups, but not to a significant extent.
## DISCUSSION
Our analysis of the OKBOSS survey data compared immigrant MBOs and employees in terms of the relationships among emotional demands, occupational resources, and depressive symptoms. After controlling for demographic and socioeconomic factors, this study confirmed a positive relationship between emotional demands and depressive symptoms, which is fully consistent with previous findings [5,6,14]. However, immigrant MBOs face elevated levels of emotional demands and are the most vulnerable to the concomitant adverse mental health problems among occupational groups [9,17]. Our study revealed that the substantially higher perception of emotional demands and accompanying greater health risk among immigrant MBOs stemmed from their difficult work environment, particularly their long working hours, and language-related challenges. In this study, MBOs reported working an average of 58.8 hours per week, while employees worked on average 40 hours per week. In North America, Korean immigrant MBOs are known to work the longest hours compared to immigrant employees and MBOs of other ethnicities [9,18]. In comparison to immigrant professionals and office workers, immigrant MBOs also appear to struggle more with language problems, reporting difficulties in completing tasks at work, interacting with others, and dealing with official documents. A qualitative study conducted in the USA illustrated that a lack of language proficiency elevated business-related stress and engendered psychological distress, while proper communication was an important asset in service businesses [19].
In line with the JD-R model, our findings show that all 3 types of occupational resources (autonomy, satisfaction, security) seemed to play a crucial role in buffering the negative effects of emotional demands on depressive symptoms among Korean-Canadian workers. This study’s findings are in agreement with assumptions and empirical evidence from earlier studies on occupational distress [11,20]. Job resources such as autonomy, satisfaction, and security are closely related to motivation and can stimulate performance and increase productivity, which in turn can result in desired health outcomes, such as lower job accidents and greater well-being [21,22]. The self-determination theory also describes how job autonomy and job engagement mitigate the psychological toll of high job demands [11,23].
In the present study, however, the beneficial effects seemed to be significantly more pronounced for immigrant MBOs than for employees. In particular, when MBOs felt a strong sense of job satisfaction and security, the effects of emotional demands on depressive symptoms were no longer significant (Figure 1). Despite experiencing the highest levels of emotional demands, when MBOs fulfill their financial goals and achieve security, they may be more likely than employees to improve their mental health and wellbeing [11,20]. Nonetheless, job autonomy was not enough to reduce the psychological impact of emotional demands for either immigrant MBOs or employees, even though MBOs appeared to enjoy the highest level of job autonomy. Similar findings were reported in a recent meta-analytic review of 106 studies [24], despite a substantial body of empirical research reporting a buffering role of job autonomy on the demand-health link [25]. Warr's vitamin theory of job resources [26] is noteworthy: similar to the intoxication effect of an overdose of vitamins, high levels of job autonomy can cause a high risk of emotional exhaustion. It is highly plausible that employees with higher job autonomy may also feel burdened by their high responsibility or have difficulty with decision-making. Indeed, several empirical studies support this argument, suggesting an inverted U-shape pattern between job resources and health [3,12]. In our study, however, the buffering roles of job autonomy were trivial not only among professionals and MBOs with high autonomy, but also among all employees, including non-manual and manual workers with low autonomy. The discrepancy across results indicates that the potential moderating effects of job resources can vary across occupations, working populations, and job environments.
The present study has several limitations. First, data were collected through convenience sampling targeting Korean Canadian immigrants, so it may not be possible to generalize the findings to the whole immigrant population in Canada. Second, the data on work characteristics such as emotional demands and job resources were collected by self-reporting, which may involve a reporting bias. However, the consistency of this study’s findings according to the JD-R model suggests that the composite index may not be a major drawback in this study. Third, caution should be taken when interpreting the dimensionality of the 16-item CES-D. Despite its lack of diagnostic validity, this 16-item CES-D scale appears useful and valid in screening study participants for depressive symptoms for hypothesis-testing among certain Asian groups [27]. Finally, due to an inadequate sample size, we could not consider specific occupations that may suffer from high levels of emotional demands, to compare the health consequences of emotional demands among MBOs; instead occupational status was categorized into 4 broad occupational groups (MBOs and professional, non-manual, and manual employees). Nonetheless, this study provides unique findings comparing the work stressors and resources of MBOs to those of employees, whereas there is a significant volume of literature and empirical studies illustrating the link between job demands and their health consequences in a wide range of occupations.
In conclusion, our study shows that MBOs experienced greater emotional demands, and their adverse impact on mental health was more marked for MBOs than for employees. Notably, the highest risk of depressive symptoms from emotional demands among MBOs was attributable to their long working hours and high level of language difficulties. Even though emotional demands are a crucial factor harming the psychological health of MBOs, the effectiveness of job satisfaction and security in compensating for such negative health effects is greater among MBOs than employees.
## Notes
CONFLICT OF INTEREST
The authors have no conflicts of interest associated with the material presented in this paper.
AUTHOR CONTRIBUTIONS
Conceptualization: IHK, SN. Data curation: IHK, SN. Funding acquisition: IHK, SN, KM. Formal analysis: IHK, CC. Methodology: IHK, CC. Writing - original draft: IHK. Writing - review & editing: IHK, SN, CC, KM.
## Acknowledgements
This research project was supported by the Canadian Institutes of Health Research in 2013.
## References
1. Schaufeli WB, Bakker AB. Job demands, job resources, and their relationship with burnout and engagement: a multi‐sample study. J Organ Behav 2004;25(3):293–315.
2. Centers for Disease Control and Prevention. National Institute for Occupational Safety and Health (NIOSH) publications and products: stress at work. 1999. [cited 2019 Jun 1]. Available from: https://www.cdc.gov/niosh/docs/99-101/default.html.
3. Bakker AB, Demerouti E, Euwema MC. Job resources buffer the impact of job demands on burnout. J Occup Health Psychol 2005;10(2):170–180.
4. Lewig KA, Dollard MF. Emotional dissonance, emotional exhaustion and job satisfaction in call centre workers. Eur J Work Organ Psychol 2003;12(4):366–392.
5. de Jonge J, Le Blanc PM, Peeters MC, Noordam H. Emotional job demands and the role of matching job resources: a cross-sectional survey study among health care workers. Int J Nurs Stud 2008;45(10):1460–1469.
6. Kim IH, Noh S, Muntaner C. Emotional demands and the risks of depression among homecare workers in the USA. Int Arch Occup Environ Health 2013;86(6):635–644.
7. Zapf D, Vogt C, Seifert C, Mertini H, Isic A. Emotion work as a source of stress: the concept and development of an instrument. Eur J Work Organ Psychol 1999;8(3):371–400.
8. Parslow RA, Jorm AF, Christensen H, Rodgers B, Strazdins L, D’Souza RM. The associations between work stress and mental health: a comparison of organizationally employed and self-employed workers. Work Stress 2004;18(3):231–244.
9. Teixeira C, Lo L, Truelove M. Immigrant entrepreneurship, institutional discrimination, and implications for public policy: a case study in Toronto. Environ Plan C Gov Policy 2007;25(2):176–193.
10. Olsson E, Ingvad B. The emotional climate of care-giving in home-care services. Health Soc Care Community 2001;9(6):454–463.
11. Demerouti E, Bakker AB, Nachreiner F, Schaufeli WB. The job demands-resources model of burnout. J Appl Psychol 2001;86(3):499–512.
12. De Jonge J, Schaufeli WB. Job characteristics and employee well‐being: a test of Warr’s Vitamin Model in health care workers using structural equation modelling. J Organ Behav 1998;19(4):387–407.
13. Theorell T, Karasek RA. Current issues relating to psychosocial job strain and cardiovascular disease research. J Occup Health Psychol 1996;1(1):9–26.
14. Brotheridge CM, Grandey AA. Emotional labor and burnout: comparing two perspectives of “people work”. J Vocat Behav 2002;60(1):17–39.
15. Baldwin J, Bian L, Dupuy R, Gellatly G. Failure rate for new Canadian firms, new perspectives on entry and exit Ottawa: Statistics Canada; 2000. p. 43–48.
16. Lawler E, Cammann C, Nadler D, Jenkins D. Michigan organizational assessment questionnaire Washington, DC: American Psychological Association; 1979.
17. Baldwin J, Gray T, Johnson J, Proctor J, Rafiquzzaman M, Sabourin D. Failing concerns, business bankruptcy in Canada Ottawa: Statistics Canada; 1997. p. 23–31.
18. Min PG. Problems of Korean immigrant entrepreneurs. Int Migr Rev 1990;24(3):436–455.
19. Kang M. The managed hand: the commercialization of bodies and emotions in Korean immigrant–owned nail salons. Gend Soc 2003;17(6):820–839.
20. Dekker SW, Schaufeli WB. The effects of job insecurity on psychological health and withdrawal: a longitudinal study. Aust Psychol 1995;30(1):57–63.
21. Brown C, Reich M, Stern D. Becoming a high-performance work organization: the role of security, employee involvement, and training. 1992. [cited 2019 Jun 1]. Available from: http://irle.berkeley.edu/workingpapers/45-92.pdf.
22. Ilardi BC, Leone D, Kasser T, Ryan RM. Employee and supervisor ratings of motivation: main effects and discrepancies associated with job satisfaction and adjustment in a factory setting 1. J Appl Soc Psychol 1993;23(21):1789–1805.
23. Deci E, Ryan RM. Intrinsic Motivation and self-determination in human behavior New York: Springer; 1985. p. 9–10.
24. Luchman JN, González-Morales MG. Demands, control, and support: a meta-analytic review of work characteristics interrelationships. J Occup Health Psychol 2013;18(1):37–52.
25. Bakker AB, Demerouti E, De Boer E, Schaufeli WB. Job demands and job resources as predictors of absence duration and frequency. J Vocat Behav 2003;62(2):341–356.
26. Warr P. Work, unemployment, and mental health New York: Oxford University Press; 1987. p. 9–14.
27. Noh S, Kaspar V, Chen X. Measuring depression in Korean immigrants: assessing validity of the translated Korean version of CES-D scale. Cross Cult Res 1998;32(4):358–377.
### Table 1.
Socio-demographic characteristics according to occupational group
| Characteristics | Total | Microbusiness owners | Professionals | Non-manual workers | Manual workers | p-value |
|---|---|---|---|---|---|---|
| Total | 1288 (100) | 550 (42.7) | 258 (20.0) | 223 (17.3) | 257 (20.0) | |
| Sex | | | | | | 0.001 |
| Male | 661 (51.3) | 293 (53.3) | 146 (56.6) | 111 (49.8) | 111 (43.2) | |
| Female | 627 (48.7) | 257 (46.7) | 112 (43.4) | 112 (50.2) | 146 (56.8) | |
| Marital status | | | | | | 0.069 |
| Married | 1129 (87.7) | 525 (95.5) | 208 (81.0) | 183 (82.1) | 212 (82.5) | |
| Not married | 159 (12.3) | 25 (4.5) | 49 (19.0) | 40 (17.9) | 45 (17.5) | |
| Pre-migration education | | | | | | 0.673 |
| High school or less | 350 (27.2) | 130 (23.6) | 77 (29.8) | 72 (32.3) | 71 (27.6) | |
| College or more | 938 (72.8) | 420 (76.4) | 181 (70.2) | 151 (67.7) | 186 (72.4) | |
| Post-migration education | | | | | | <0.001 |
| No | 871 (67.6) | 478 (86.9) | 97 (37.6) | 96 (43.1) | 200 (77.9) | |
| Yes | 417 (32.4) | 72 (13.1) | 161 (62.4) | 127 (56.9) | 57 (22.1) | |
| Annual household income (US$) | | | | | | <0.001 |
| <40 000 | 333 (25.8) | 163 (29.6) | 22 (8.5) | 41 (18.4) | 107 (41.6) | |
| 40 000-79 999 | 560 (43.5) | 267 (48.6) | 74 (28.7) | 99 (44.4) | 120 (46.7) | |
| ≥80 000 | 395 (30.7) | 120 (21.8) | 162 (62.8) | 83 (37.2) | 30 (11.7) | |
| Age | 49.72±8.96 | 53.59±7.28 | 45.62±8.56 | 44.64±8.82 | 49.95±8.80 | <0.001 |
| Years since immigration | 16.30±8.86 | 17.16±8.75 | 16.56±9.56 | 15.94±8.44 | 14.80±8.55 | 0.005 |
| Working hours | 48.29±15.03 | 58.76±16.09 | 40.87±6.60 | 39.25±5.43 | 41.15±9.84 | <0.001 |
| Language barriers | 4.72±3.69 | 5.59±3.67 | 3.16±3.18 | 3.39±3.15 | 5.60±3.72 | <0.001 |
| Emotional demands | 18.08±5.83 | 19.69±4.40 | 16.40±6.74 | 16.52±6.63 | 17.67±5.92 | <0.001 |
| Job autonomy | 7.68±2.50 | 8.82±2.05 | 8.08±2.13 | 6.73±2.10 | 5.66±2.47 | <0.001 |
| Job security | 4.22±1.15 | 4.28±0.97 | 4.58±1.13 | 4.25±1.20 | 3.71±1.31 | <0.001 |
| Job satisfaction | 6.14±2.62 | 5.40±2.41 | 8.11±1.90 | 7.06±2.20 | 4.94±2.66 | <0.001 |
| Depression | 6.19±7.13 | 6.37±7.35 | 5.07±5.71 | 5.65±6.51 | 7.38±8.21 | 0.002 |
| Sleep deprivation | 12.77±4.36 | 13.04±4.50 | 12.07±4.02 | 12.33±4.45 | 13.30±4.22 | 0.002 |

Values are presented as number (%) or mean±standard deviation.

### Table 2.

Associations between emotional demands and depression according to occupational group

| Variables | Model 1 | Model 2 | Model 3 | Model 4 |
|---|---|---|---|---|
| Intercept | 3.834* | 4.870* | 4.315* | 1.783 |
| Age | -0.046 | -0.037 | -0.033 | -0.063* |
| Female (vs. male) | 0.877* | 0.142 | 0.269 | 0.367 |
| Not married (vs. married) | 2.804*** | 2.774*** | 2.753*** | 2.857*** |
| Microbusiness owners (vs. professionals) | 0.890 | -0.059 | -0.085 | -0.767 |
| Non-manual workers (vs. professionals) | 0.077 | 0.263 | 0.282 | 0.351 |
| Manual workers (vs. professionals) | 1.183 | 0.951 | 1.062 | 0.872 |
| Years of immigration | 0.032 | 0.032 | 0.032 | 0.071* |
| High school or less (vs. college or more) | -0.067 | 0.149 | 0.166 | -0.015 |
| No Canadian education (vs. yes) | 0.866 | 1.099* | 1.000 | 0.513 |
| Annual income 40 000-79 999 (vs. ≥80 000, US$) | 1.116* | 0.508 | 0.624 | 0.532 |
| Annual income <40 000 (vs. ≥80 000, US$) | 2.300*** | 1.624** | 1.742** | 1.329* |
| Working hours | | | | 0.034* |
| Language barriers | | | | 0.305*** |
| Emotional demands | | 0.245*** | 0.175*** | 0.159 |
| Non-manual workers × Emotional demands | | | 0.020 | 0.023 |
| Manual workers × Emotional demands | | | 0.097 | 0.089 |
| F for change in R² | 6.57 | 19.88 | 10.64 | 11.44 |
| R² | 0.046 | 0.100 | 0.101 | 0.121 |

All predictor variables were mean-centered. *p<0.05, **p<0.01, ***p<0.001.
### Table 3.
Associations among emotional demands, occupational resources, and depression according to occupational group
| Variables | Job autonomy, model 1 | Job autonomy, model 2 | Job satisfaction, model 1 | Job satisfaction, model 2 | Job security, model 1 | Job security, model 2 |
|---|---|---|---|---|---|---|
| Intercept | 1.716 | 2.348 | 3.119* | 3.081 | 1.617 | 2.659 |
| Age | -0.073* | -0.077** | -0.073* | -0.077** | -0.057* | -0.063* |
| Female (vs. male) | 0.073 | 0.217 | 0.502 | 0.655 | 0.505 | 0.624 |
| Not married (vs. married) | 2.728*** | 2.678*** | 2.902*** | 2.780*** | 2.898*** | 2.850*** |
| Years of immigration | 0.070* | 0.071* | 0.064* | 0.062* | 0.071* | 0.069* |
| High school or less (vs. college or more) | -0.163 | -0.081 | -0.011 | -0.062 | -0.210 | -0.156 |
| No Canadian education (vs. yes) | 0.452 | 0.399 | 0.271 | 0.170 | 0.469 | 0.494 |
| Annual income 40 000-79 999 (vs. ≥80 000, US$) | 0.355 | 0.489 | 0.204 | 0.303 | -0.105 | -0.059 |
| Annual income <40 000 (vs. ≥80 000, US$) | 1.174* | 1.295* | 0.989 | 1.083* | 0.435 | 0.550 |
| Microbusiness owners (vs. professionals) | -0.349 | -0.571 | -2.254*** | -1.718* | -0.809 | -0.754 |
| Non-manual workers (vs. professionals) | -0.121 | -0.143 | -0.396 | 0.073 | -0.001 | 0.154 |
| Manual workers (vs. professionals) | 0.100 | -0.377 | -1.137 | -0.622 | -0.132 | 0.046 |
| Working hours | 0.041* | 0.044** | 0.033* | 0.036* | 0.034* | 0.033* |
| Language barriers | 0.282*** | 0.272*** | 0.257*** | 0.259*** | 0.239*** | 0.228*** |
| Emotional demands | 0.221*** | 0.168** | 0.219*** | 0.026 | 0.222*** | 0.171** |
| Job resources | -0.363*** | -0.229 | -0.728*** | -0.592** | -1.563*** | -1.081** |
| Emotional demands × Resources | -0.021* | -0.004 | -0.027** | 0.033 | -0.070*** | -0.010 |
| Non-manual workers × Emotional demands | | -0.013 | | 0.166* | | 0.109 |
| Manual workers × Emotional demands | | -0.014 | | 0.205** | | 0.016 |
| Non-manual workers × Job resources | | -0.146 | | -0.170 | | 0.092 |
| Manual workers × Job resources | | -0.448 | | -0.189 | | -0.760 |
| Non-manual workers × Demands × Resources | | -0.014 | | -0.062 | | 0.017 |
| Manual workers × Demands × Resources | | -0.047 | | -0.034 | | 0.013 |
| F for change in R² | 13.45 | 9.25 | 19.23 | 12.23 | 19.38 | 14.15 |
| R² | 0.134 | 0.138 | 0.185 | 0.179 | 0.186 | 0.203 |

All predictor variables were mean-centered. *p<0.05, **p<0.01, ***p<0.001.
rod2dcm
Convert Euler-Rodrigues vector to direction cosine matrix
Syntax
``dcm=rod2dcm(R)``
Description
`dcm = rod2dcm(R)` calculates the direction cosine matrix, `dcm`, for a given Euler-Rodrigues (also known as Rodrigues) vector, `R`.
Examples
Determine the direction cosine matrix from the Euler-Rodrigues vector.
```matlab
r = [.1 .2 -.1];
DCM = rod2dcm(r)

DCM =

    0.9057   -0.1509   -0.3962
    0.2264    0.9623    0.1509
    0.3585   -0.2264    0.9057
```
Input Arguments
`R` — M-by-3 matrix containing M Rodrigues vectors.
Data Types: `double`
Output Arguments
`dcm` — 3-by-3-by-M array containing M direction cosine matrices.
Algorithms
An Euler-Rodrigues vector $\vec{b}$ represents a rotation by integrating the direction cosines of the rotation axis with the tangent of half the rotation angle as follows:

$\vec{b} = \left[\begin{array}{ccc} b_x & b_y & b_z \end{array}\right]$

where:

$b_x = \tan\left(\tfrac{1}{2}\theta\right) s_x, \quad b_y = \tan\left(\tfrac{1}{2}\theta\right) s_y, \quad b_z = \tan\left(\tfrac{1}{2}\theta\right) s_z$

are the Rodrigues parameters. The vector $\vec{s}$ is a unit vector around which the rotation is performed. Due to the tangent, the rotation vector is indeterminate when the rotation angle equals ±π radians (±180 deg). Values can be negative or positive.
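For readers outside MATLAB, the same conversion can be reproduced with a short NumPy sketch. This is not MathWorks' implementation; it is a minimal version assuming the Gibbs-vector (Rodrigues-parameter) form of the direction cosine matrix, and it reproduces the documented example above.

```python
import numpy as np

def rod2dcm(r):
    """Euler-Rodrigues (Gibbs) vector -> 3x3 direction cosine matrix (minimal sketch)."""
    r = np.asarray(r, dtype=float)
    rr = r @ r                               # |r|^2 = tan^2(theta/2)
    rx = np.array([[0.0, -r[2], r[1]],
                   [r[2], 0.0, -r[0]],
                   [-r[1], r[0], 0.0]])      # skew-symmetric cross-product matrix
    # The minus sign on the skew term gives the frame-transformation (DCM) convention,
    # i.e. the transpose of the point-rotation matrix.
    return ((1.0 - rr) * np.eye(3) + 2.0 * np.outer(r, r) - 2.0 * rx) / (1.0 + rr)

print(np.round(rod2dcm([0.1, 0.2, -0.1]), 4))
# [[ 0.9057 -0.1509 -0.3962]
#  [ 0.2264  0.9623  0.1509]
#  [ 0.3585 -0.2264  0.9057]]
```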
References
[1] Dai, J.S. "Euler-Rodrigues formula variations, quaternion conjugation and intrinsic connections." Mechanism and Machine Theory, 92, 144-152. Elsevier, 2015.
Version History
Introduced in R2017a | {} |
Journal of Materials Science & Technology, 2020, 49(0): 70-80 doi: 10.1016/j.jmst.2020.01.051
Research Article
## Kinetic transitions and Mn partitioning during austenite growth from a mixture of partitioned cementite and ferrite: Role of heating rate
Geng Liu a, Zongbiao Dai a, Zhigang Yang a, Chi Zhang a, Jun Li b, Hao Chen a,*
a Key Laboratory for Advanced Materials of Ministry of Education, School of Materials Science and Engineering, Tsinghua University, Beijing, 100084, China
b Research Institute of Baoshan Iron and Steel Co., Ltd, Shanghai, 201900, China
Corresponding author: * E-mail address: hao.chen@mail.tsinghua.edu.cn (H. Chen).
Received: 2019-11-26 Revised: 2020-01-8 Accepted: 2020-01-11 Online: 2020-07-15
Abstract
Austenite formation from a ferrite-cementite mixture is a crucial step during the processing of advanced high strength steels (AHSS). The ferrite-cementite mixture is usually inhomogeneous in both structure and composition, which makes the mechanism of austenite formation very complex. In this contribution, austenite formation upon continuous heating from a designed spheroidized cementite structure in a model Fe-C-Mn alloy was investigated with an emphasis on the role of heating rate in kinetic transitions and element partitioning during austenite formation. Based on the partitioning/non-partitioning local equilibrium (PLE/NPLE) assumption, austenite growth during slow heating was found to proceed through alternating PLE-, NPLE- and PLE-controlled stages of interface migration, whereas upon fast heating the NPLE mode predominantly controlled austenitization through a synchronous dissolution of ferrite and cementite. It was both experimentally and theoretically found that there is long-distance diffusion of Mn within austenite in the slow-heated sample, while a sharp Mn gradient is retained within austenite in the fast-heated sample. Such a strongly heterogeneous distribution of Mn within austenite causes a large difference in the driving force for ferrite or martensite formation during the subsequent cooling process, which can lead to distinctly different final microstructures. The current study indicates that fast heating could lead to unique microstructures that could hardly be obtained via the conventional annealing process.
Keywords: Cementite ; Austenite ; Kinetics ; Elements partitioning ; Fast heating
## 1. Introduction
Austenitization has attracted much attention owing to its significant role in controlling the final microstructure and mechanical properties of advanced high-strength steels (AHSS) [[1], [2], [3], [4]]. The initial microstructure of AHSSs prior to austenitization is usually a mixture of pearlite and ferrite (e.g. cementite and ferrite) with a heterogeneous distribution of elements (e.g. C, Mn, etc.). This inhomogeneity in both structure and composition leads to complicated austenitization behavior upon subsequent heating [5,6].
Speich et al. [7] studied austenite formation in Fe-C-Mn steels with an initial microstructure of pearlite and proeutectoid ferrite. Compared with ferrite, pearlite is thermodynamically and kinetically more favorable for transformation into austenite owing to its enrichment in austenite stabilizers (C and Mn). Three kinetic stages during austenite formation were identified: (i) rapid growth of austenite into pearlite; (ii) slower growth of austenite into the remaining proeutectoid ferrite; (iii) Mn equilibration between austenite grains. Nevertheless, due to the structural complexity of pearlite, the role of cementite in austenite formation was not discussed in detail.
In contrast to the regularly spaced cementite plates in pearlite, spheroidized cementite weakens the coupled diffusion effects [8] during austenite formation, which simplifies the study of its dissolution behavior. Lenel et al. [9] suggested that austenite preferentially nucleates at the interface between ferrite and cementite particles, and that it encircles the cementite particles almost instantaneously upon nucleation and then extends into both ferrite and cementite. Austenite growth would then proceed via the cooperative migration of the γ/α and γ/θ interfaces, which is closely related to the partitioning behavior of carbon and alloying elements [10].
Miyamoto et al. [11] systematically investigated the effects of Si, Mn and Cr additions on the kinetics of isothermal austenite formation from tempered martensite consisting of ferrite and spheroidized cementite in Fe-0.6C steels. It was suggested that the alloying elements could partition at the γ/α and γ/θ interfaces and retard austenite growth. The kinetics of isothermal austenite formation from cementite and ferrite have also been simulated using the local equilibrium (LE) model [[12], [13], [14], [15]]. Complex kinetic transitions between non-partitioning local equilibrium (NPLE) and partitioning local equilibrium (PLE) were predicted to occur during the migration of the γ/α and γ/θ interfaces. In the NPLE mode, the migration of the γ/α and γ/θ interfaces is carbon diffusion-controlled and proceeds quite fast, while in the PLE mode, interface migration is sluggish owing to the partitioning of substitutional elements. Nevertheless, austenite formation from cementite and ferrite upon continuous heating has been investigated much less, although it is of practical interest for steel production. Upon continuous heating, kinetic transitions and alloying element partitioning behaviors at the γ/α and γ/θ interfaces are expected to be thermodynamically and kinetically complex, and should be heating rate dependent. The heating rate in a conventional annealing line is usually near 5 °C/s, for which austenite formation upon continuous heating is not expected to differ significantly from isothermal austenite formation.
Recently, fast heating (100-300 °C/s) has been proposed as a promising technology for producing strip steels [16,17]. It was found that fast heating not only improves energy efficiency and enhances productivity, it can also lead to unique ultrafine microstructures and a significant improvement of the mechanical properties due to non-equilibrium austenite formation upon fast heating [[18], [19], [20], [21]]. Although fast heating technology has shown great potential in strip steel production, non-equilibrium austenite formation during fast heating is still not well understood.
In this contribution, austenite formation upon continuous heating from a designed spheroidized cementite structure in an Fe-C-Mn alloy was investigated both experimentally and theoretically, with a focus on clarifying the role of heating rate in kinetic transitions and element partitioning behavior. Combining nano-Auger electron spectroscopy equipped with electron backscatter diffraction (AES-EBSD) analysis and local equilibrium simulations, the effects of C and Mn partitioning on the synergetic migration of the γ/α and γ/θ interfaces during thermal cycles with a slow heating rate (0.1 °C/s) and a fast heating rate (100 °C/s) were investigated comparatively. The formation mechanism of the heterogeneous microstructure is discussed in detail. From the viewpoint of controlled element diffusion, we suggest that tuning the heating rate may open new routes for microstructural design in steels.
## 2. Experimental
The investigated ternary alloy has a chemical composition of Fe-0.23C-1.54 Mn (wt.%). To obtain an initial microstructure with large ferrite grains and uniformly distributed cementite particles, the heat treatment process was designed as illustrated in Fig. 1. The hot-rolled plate with a thickness of 7.5 mm was re-homogenized at 1200 °C for 24 h and water quenched to room temperature afterwards. The plate was then tempered at 300 °C for an appropriate period to make the martensite deformable, and it was subsequently cold rolled to 2.5 mm with a reduction of 67 %. A high-temperature (650 °C) tempering treatment of the cold-rolled plate was carried out in an Ar-filled tube furnace for 120 h, including the heating and cooling process.
Fig. 1. Sketch of pre-treatment process of the test sample.
Samples 2 × 4 × 10 mm³ in size were cut from the middle layer of the pre-treated plate to avoid the influence of possible decarburization. Phase transformations during heating and quenching were measured using a DIL-805 A/D type dilatometer. To avoid possible microstructure changes during heating from room temperature to the start temperature of austenite formation, samples were first heated to 650 °C at 100 °C/s. The specimens were then heated at rates of 0.1 °C/s and 100 °C/s to various temperatures and quenched to room temperature with a measured cooling rate of 250 °C/s.
Microstructures of the specimens were first examined by scanning electron microscopy (SEM) after mechanical polishing and etching with a 4 % Nital solution. Further microstructural characterization was carried out by an SEM equipped with a PHI-710 type nano-Auger electron spectroscopy and electron backscatter diffraction (AES-EBSD) system. Samples were electrolytically polished in a mixed solution of 20 % perchloric acid and 80 % ethanol at 15 V and 1.2 A for 20 s. EBSD measurements were focused on the thermocouple-welded area of each sample, using a step size of 30 nm, a tilt angle of 70°, an accelerating voltage of 20 kV and a beam current of 10 nA. The phase constituents and orientations were analyzed with the TSL-OIM software. A minimum confidence index of 0.01 for neighbor CI correlation was selected to clean up the noise points. At the same location, carbon and manganese profiles were measured through the line-scan mode of the nano-AES with a spatial resolution of about 18 nm. The atomic percentage of Mn was quantified with the aid of a group of high-purity Fe-C-Mn standard samples with linearly varying Mn contents; more details can be found in Ref. [22]. It is noteworthy that the line scanning was conducted before the EBSD measurements in order to avoid the effects of the electron beam on the element content at the surface.
## 3. Experimental results
### 3.1. Initial microstructures
The image quality phase color map of the investigated steel after spheroidization is shown in Fig. 2(a). Cementite and ferrite are in yellow and gray, respectively. The initial microstructure prior to austenitization consists of a sparse distribution of spheroidized cementite in the coarse ferrite matrix. The substructures of the cold-rolled martensite were almost eliminated by recrystallization during tempering. Fig. 2(b) exhibits the orientation map of the ferrite grain and cementite particles in Fig. 2(a). Cementite particles have different orientations even though they are in the same ferrite grain. Fig. 2(c, d) show a Mn profile across a cementite particle and ferrite measured using the AES technique. It shows that Mn is highly enriched and uniformly distributed in cementite, with a content close to the equilibrium value calculated by Thermo-Calc. A statistical measurement of the Mn content in cementite particles of various sizes and their surrounding ferrite is displayed in Fig. 2(e). These cementite particles were captured in a 20 × 20 μm² square field in a SEM image. It indicates that the prolonged tempering brings the Mn content in most cementite particles close to equilibrium [23,24]. A few particles with a rather small size, less than 100 nm, were not taken into account.
### Fig. 2.
Fig. 2. Initial microstructures before austenitization: (a) Image quality maps with phase maps marking cementite in yellow; (b) orientation map; (c) SEM image showing the microstructure of one cementite particle in the ferrite matrix; (d) Mn profile along the scanning line in Fig. 2(c); (e) Statistics of Mn content in cementite and its surrounding ferrite in a 20 × 20 μm² square in a SEM image.
### 3.2. Transformation kinetics
Fig. 3(a) shows the dilatometric curves of the investigated steels subjected to continuous heating at different rates from ambient temperature to austenitization temperature. With an increase in heating rate from 0.1 °C /s to 100 °C /s, the Ac1 increases from 720 °C to 776 °C, while Ac3 increases from 828 °C to 878 °C. The shape of the two curves remains similar, and the abnormal deviation from linear expansion during heating before Ac1 is considered to be caused by the switching of heating rate and the magnetic transition [25].
### Fig. 3.
Fig. 3. Relative length changes of the specimens as a function of temperature: (a) continuous heating at 0.1 °C/s and 100 °C/s to the fully austenitizing temperature; (b) continuous heating at 0.1 °C/s and quenching from 775 °C and 797 °C; (c) continuous heating at 100 °C/s and quenching from 827 °C, 837 °C and 857 °C.
Fig. 3(b) exhibits the dilatometric curves of the investigated steel heated at 0.1 °C/s to temperatures lower than Ac3 and then quenched to room temperature. About 50 % and 80 % of austenite had formed at 775 °C and 797 °C, respectively, as calculated by the lever rule [26]. Upon quenching, a clear signal of martensite transformation was detected in both samples, and the martensite start temperature was found to increase with increasing intercritical annealing temperature. Besides martensite transformation, a slight expansion of the sample was also identified at the very beginning of cooling, which was deduced to be caused by ferrite formation. It will be discussed later.
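The austenite fractions quoted here follow the standard dilatometric lever rule: the measured relative length change at a given temperature is compared with the linear expansion branches of the parent (ferrite + cementite) and fully austenitic states extrapolated into the transformation range. A minimal sketch of that bookkeeping is given below; the fitting windows are illustrative placeholders that must be read off the measured curve, and this is not the DIL-805 analysis software.

```python
import numpy as np

def lever_rule_fraction(T, dL, parent_window=(600, 690), austenite_window=(890, 950)):
    """Austenite fraction vs. temperature from a heating dilatometric curve (lever-rule sketch).

    T, dL            : arrays of temperature (degC) and relative length change dL/L0
    parent_window    : T range on the ferrite + cementite branch used for linear extrapolation
    austenite_window : T range on the fully austenitic branch used for linear extrapolation
    """
    T, dL = np.asarray(T, float), np.asarray(dL, float)
    m_lo = (T >= parent_window[0]) & (T <= parent_window[1])
    m_hi = (T >= austenite_window[0]) & (T <= austenite_window[1])
    lo = np.polyval(np.polyfit(T[m_lo], dL[m_lo], 1), T)   # extrapolated alpha + theta line
    hi = np.polyval(np.polyfit(T[m_hi], dL[m_hi], 1), T)   # extrapolated austenite line
    return np.clip((dL - lo) / (hi - lo), 0.0, 1.0)        # lever rule: fraction of austenite
```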
Fig. 3(c) shows the dilatometric curves of the investigated steel heated at 100 °C/s to various temperatures and then quenched to ambient temperature. About 50 % and 90 % of austenite were estimated to have formed at intercritical temperatures of 827 °C and 857 °C, respectively. The phase transformation behavior during cooling is quite complex for the fast-heated samples and is strongly dependent on the intercritical annealing temperature. At the beginning of cooling, no obvious phase transformation was observed in the sample quenched from 857 °C, while a non-linear contraction due to austenite formation was detected when the sample was quenched from 827 °C or 837 °C. This abnormal austenite formation upon cooling was named inverse transformation in Ref. [27] and is considered to be caused by non-equilibrium conditions at the heating-cooling reversal temperatures. After a short transitory stage, an obvious non-linear expansion due to ferrite formation can be observed in the samples quenched from 827 °C or 837 °C.
### 3.3. Resulted microstructures
Fig. 4(a) shows the microstructure of the sample heated at 0.1 °C/s to 775 °C. The austenite/martensite/cementite islands are found to be uniformly distributed in the ferrite matrix, which indicates that the ferrite/cementite interfaces served as the nucleation sites for austenite formation. A representative austenite/martensite/cementite island was characterized in detail using the nano-AES-EBSD. The SEM image (Fig. 4(b)) clearly indicates that there are four layers of structure with different contrast extending from the center to the edge. The image quality map taken from the same region obtained by EBSD is shown in Fig. 4(c). The undissolved cementite (in yellow) can be identified in the center, surrounded by austenite (in green). The dark gray region revealed at the outer shell of the austenite is identified as martensite due to its poor image quality (IQ) value [28]. Ferrite can be distinguished at the outermost layer by its higher IQ. The corresponding orientation map is shown in Fig. 4(d). It shows that the displayed blocky austenite belongs to the same austenite grain. The martensite/austenite interfaces with the K-S orientation relationship (OR) are indicated by the black lines. Using the reconstruction method of the parent austenite [29,30], a core-shell structure of a cementite particle and its enveloping austenite shell can be observed, as shown in the inset of Fig. 4(d).
### Fig. 4.
Fig. 4. (a) Micrograph of the slow-heated sample quenched from 775 °C; (b) SEM image showing the diffusion field across one cementite particle; (c) Image quality map with phase maps marking retained austenite (RA) and cementite; (d) Crystal orientation imaging maps; (e)C and Mn profiles along the line in Fig. 4(b); (f) SEM image at higher voltage mode showing the distinguished contrast in the surrounding ferrite.
C and Mn profiles along this multi-layer structure, indicated by a yellow line in Fig. 4(b), were investigated with the Auger nano probe, as shown in Fig. 4(e). The corresponding interface positions between phases are highlighted by the yellow dashed lines. A carbon gradient is detected from the central cementite to the outer ferrite, where two sharp descents can be noted at the θ/γ interface and the α'/α interface. The Mn distribution, indicated by the blue curve, suggests that Mn partitioning from cementite to austenite proceeded during the thermal cycle. Across the austenite and martensite, the Mn content decreased gradually from nearly 16 % in cementite to 1 %-2 % in ferrite. Fig. 4(f) shows the trace of epitaxial ferrite, identified by its different contrast with the surrounding ferrite due to a distinct compositional gradient [31], which is in accordance with the non-linear expansion at the beginning of quenching in Fig. 3(b).
Fig. 5(a) shows the microstructure of the fast-heated (100 °C/s) sample quenched from 827 °C, where the austenite-martensite-cementite islands are also homogeneously distributed but with a smaller size compared with the slow-heated samples (see Fig. 4(a)). A representative island characterized by the nano-AES-EBSD indicates that fast heating leads to a pronounced difference in the multilayer structure. Only three layers of structure were found in the islands, as displayed in the SEM image with different contrasts (Fig. 5(b)). The image quality map with superposed phase color map in Fig. 5(c) clearly highlights a core-shell structure consisting of cementite (yellow) and the surrounding austenite (green). In this island, fresh martensite was not observed, which is confirmed by the orientation map (Fig. 5(d)). Fig. 5(e) shows the distribution of C and Mn across the cementite, austenite and ferrite measured by the Auger nano probe. It is seen that the carbon intensity displays a distinctively sharp gradient from cementite to austenite. Mn redistribution is marginal at the austenite/cementite interface, while a distinct Mn gradient was detected across the austenite/ferrite interface. The region where the Mn gradient exists might be the original position of the cementite/ferrite interface. Considering the very short annealing time, the Mn-enriched austenite could mostly form via the migration of the austenite/cementite interface into cementite. It is difficult to deduce from the experiments whether the austenite/ferrite interface moved or not during the fast annealing.
### Fig. 5.
Fig. 5. (a) Micrograph of the fast-heated sample quenched from 827 °C; (b) SEM image showing the diffusion field across one cementite particle; (c) Image quality maps with phase maps marking RA and cementite; (d) Crystal orientation imaging maps; (e)C and Mn profiles along the line in Fig. 5(b).
Fig. 6(a) shows the IQ map of 5 islands in the sample fast-heated to 827 °C at 100 °C/s and then quenched. There is no trace of martensite at each island. The RA islands (No. 1, 2 and 5) suggest that cementite is totally dissolved. The corresponding crystal orientation imaging map is shown in Fig. 6(b). Assuming each RA island is originated from a cementite particle, we can clearly find several austenite grains could nucleate within the same cementite particle under fast-heating conditions, as shown by the RA island of No. 1, 3 and 5. The Mn profiles across islands 1-4 (indicated by the arrow line in the crystal orientation imaging map) shown in Fig. 6(c) are all quite symmetrical. A steep Mn gradient could be observed at all these austenite/ferrite interfaces. According to Fig. 6(c), the Mn contents in γ at the No. 1 and 2 RA islands (12.2 ± 1.1 wt. % and 11.8 ± 0.4 wt. %) are found to be lower than the value in islands No. 3 and 4 (14.5 ± 0.6 wt. % and 14.7 ± 1.0 wt. %) in which cementite was not dissolved completely. This indicates that the Mn content in cementite would also influence the kinetics of cementite dissolution during fast heating.
### Fig. 6.
Fig. 6. (a) Image quality map of multi-diffusion field with phase maps marking RA and cementite microstructure of fast-heated sample quenched at 827 °C; (b) Crystal orientation imaging map; (c) Mn profiles across the RA regions along the scanning line in Fig. 6(b).
Fig. 7(a, b) show the IQ maps of the fast-heated samples quenched from higher temperatures, 857 °C and 900 °C, respectively. Based on the dilatation curves, about 80 % of austenite is formed at 857 °C, while the sample is fully austenitized at 900 °C. The matrix of the as-quenched fast-heated samples is martensite, containing a small amount of RA and ferrite. The morphology of RA in the fast-heated sample quenched from 857 °C is quite similar to that of the initial cementite (see Fig. 2(a)). Fig. 7(c, d) show the image quality maps of the slow-heated samples quenched from 797 °C and 900 °C, respectively. In the slow-heated sample quenched from 797 °C a small amount of austenite was retained. Nevertheless, no RA is identified in the sample quenched from 900 °C. The corresponding orientation maps of the slow-heated and fast-heated samples in Fig. 7(e-h) indicate that fast heating could lead to a refinement of the parent austenite grains, derived from the multiple orientations of RA (highlighted with dark circles) and the substructures of martensite. Fig. 7(i-k) display typical austenite grains in SEM together with the corresponding Mn distributions among the phases. In the fast-heated sample quenched from 857 °C, a tiny undissolved cementite particle still exists inside the large austenite grain, as shown in Fig. 7(i). The sharp Mn gradient at the RA/martensite boundary indicates that little Mn partitioning occurred during the fast heating and quenching, and that the RA formed from the original cementite. Similar microstructures have demonstrated great potential for reaching excellent mechanical properties [32,33]. However, in the slow-heated sample quenched from 797 °C shown in Fig. 7(k), the cementite has almost completely dissolved, and the Mn gradient has diffused out.
### Fig. 7.
Fig. 7. (a-d) Image quality map with phase maps marking RA in green; (e-h) orientation maps of the corresponding region in which RA are highlighted in dark circle; (i-k) SEM pictures of single RA covered with the Mn profile.
## 4. Simulation results and discussion
### 4.1. Growth of austenite under local equilibrium
During austenite formation from cementite and ferrite, it was experimentally found that austenite could grow with and without distinguishable alloying element partitioning during continuous heating. Fig. 8 shows schematic isothermal sections of the Fe-C-Mn phase diagram which indicate Mn diffusion-controlled growth of austenite (Fig. 8(a)) and C diffusion-controlled growth of austenite (Fig. 8(b)), both of which are expected to occur during heating. The critical temperature at which the transformation mode switches is denoted as the partition to non-partition transition temperature (PNTT) [34,35]. At a temperature below the PNTT, substitutional alloying element diffusion is required to balance the carbon activity (denoted as ac) difference at the γ/α and γ/θ interfaces. At temperatures above the PNTT, austenite can grow without partitioning of substitutional elements via the establishment of a positive carbon activity gradient from the γ/θ interface to the γ/α interface (denoted as acγ/θ and acγ/α). YMn in Fig. 8 is the site fraction of Mn, defined mathematically as YMn = XMn/(XFe + XMn), where XFe and XMn are the mole fractions of Fe and Mn. The gap between the two carbon isoactivity lines (Δac) in Fig. 8(b) enlarges with increasing temperature until one phase is totally dissolved.
### Fig. 8.
Fig. 8. Schematic isothermal sections of Fe-C-Mn phase diagrams to indicate the austenite growth under local equilibrium condition: (a) Mn diffusion-controlled austenite growth, e.g. Partitioning Local Equilibrium (PLE) (b) C diffusion-controlled austenite growth, e.g. Negligible Partitioning Local Equilibrium (NPLE) (PNTT: Partition to Non-partition Transition Temperature).
### 4.2. Kinetic transitions on austenite transformation
The growth of austenite under the local equilibrium condition was simulated with the DICTRA software, using the TCFE7 thermodynamic database and the MOB2 mobility database. In the DICTRA simulations, a core-shell structure consisting of cementite and ferrite is set as the initial structure, where it is assumed that the cementite particles have a fraction of about 3.4 % according to thermodynamic calculations, with an average radius of 400 nm. The size of the simulation cell is assumed to be Rcell = 2140 nm. The Mn content of the initial cementite is set to 16 wt.% according to the AES measurements, and the composition of ferrite was set according to mass balance. The PNTT in this model was calculated thermodynamically to be near 762 °C. As illustrated in the inset figure of Fig. 9(a), austenite is set as an "inactive" phase which forms at the interface of cementite and ferrite during heating when the driving force exceeds 10⁻⁵ J/mol [15].
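The ferrite composition "set according to mass balance" follows directly from the measured cementite fraction and Mn content. A quick check with the values quoted above (treating the 3.4 % cementite fraction as a weight fraction for simplicity) gives roughly 1 wt.% Mn in ferrite, consistent with the value used for the massive-transformation estimate in the next paragraph:

```python
# Mn mass balance over the initial cementite (theta) + ferrite (alpha) mixture, all in wt.%
x_alloy_Mn = 1.54    # overall Mn content of the Fe-0.23C-1.54Mn alloy
f_theta    = 0.034   # cementite fraction from the thermodynamic calculation (~3.4 %)
x_theta_Mn = 16.0    # Mn content of cementite from the AES measurements

x_alpha_Mn = (x_alloy_Mn - f_theta * x_theta_Mn) / (1.0 - f_theta)
print(f"Mn in ferrite ~ {x_alpha_Mn:.2f} wt.%")   # ~1.03 wt.%
```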
### Fig. 9.
Fig. 9. (a) Comparison between the kinetics of austenite formation upon heating simulated by DICTRA and measured by dilatation; (b) The γ/α and γ/θ interface position as a function of temperature upon heating with 0.1 °C/s and 100 °C/s.
Fig. 9(a) shows the simulated kinetics of austenite formation from a mixture of cementite and ferrite (dashed line), which is in qualitative agreement with the dilatometry measurements (solid line). It is worth noting that the predicted kinetic transitions are defined based on the predicted migration modes of the austenite/ferrite and austenite/cementite interfaces, which can be derived from the predicted C/Mn profiles. As shown in Fig. 9(a), the simulations show that the switch of interface migration mode indeed changes the kinetics of austenite formation to some extent, while it is not significant enough to be detected by dilatometry. The deviation between the simulated and experimental curves presumably results from the heterogeneous distribution of cementite particles, which may influence the austenitization kinetics in terms of nucleation and hard impingement during growth. The start and finish temperatures of austenite formation are predicted to increase with increasing heating rate, which is in good agreement with the experiments. Three distinct kinetic transitions during the growth of austenite were also predicted during the slow and fast heating, as marked by the blue dots in Fig. 9(a). For the slow-heated sample, austenite grows slowly from 700 °C to near 765 °C (stage I), then grows rapidly in a very short temperature interval from 765 °C to ~771 °C (stage II), and slows down thereafter until the fraction of austenite reaches 100 % at 825 °C (stage III). For the fast-heated sample, austenitization is mainly accomplished by the fast growth stage from ~770 °C to ~837 °C (stage II). Before 770 °C, the growth of austenite is extremely restrained (stage I). It is noteworthy that the measured Ac3 (878 °C) for fast heating is slightly higher than the predicted massive transformation transition temperature (870 °C) for ferrite with 1 wt.% Mn [36]. However, it is quite challenging to directly prove the presence of massive transformation via experiments, as the austenite/ferrite and austenite/cementite interfaces migrate cooperatively.
Fig. 9(b) displays the positions of the γ/α and γ/θ interfaces during continuous heating. The Y axis represents the radial direction of the cell. Austenite starts to nucleate at the position of 0.4 μm, then grows into cementite and ferrite via the γ/θ and γ/α interfaces, respectively. It is clearly shown that there exists a critical temperature (PNTT) for the migration of both the γ/α and γ/θ interfaces during continuous heating, above which the kinetics of both interfaces remarkably speed up. For the slow-heated case, in stage I, it is predicted that the γ/α interface starts to move into ferrite, while the γ/θ interface is almost immobile. Both the γ/θ and γ/α interfaces migrate in stage II. Cementite dissolves completely at the end of stage II, while the γ/α interface continues to migrate sluggishly in stage III. For the fast-heated case, the migration of the γ/θ and γ/α interfaces is negligible below ~770 °C, but they then migrate synchronously in opposite directions as the temperature increases. Different from the slow-heating case, it is predicted that cementite cannot fully dissolve even after the full dissolution of ferrite, which results in a third stage for the dissolution of the remaining cementite. This is also in good agreement with the experimental observations in Fig. 7.
The evolution of the C and Mn profiles for the slow-heated and fast-heated samples during heating is presented in Fig. 10. For the slow-heated case, as shown in Fig. 10(a) and (b), at 750 °C (stage I), Mn in both cementite and ferrite diffused into the newly formed austenite, indicating that the migration of both the γ/θ and γ/α interfaces is controlled by Mn partitioning, e.g. the PLE mode. A carbon gradient is predicted to exist in austenite at 750 °C due to the existence of the Mn gradient, as the carbon activity is Mn content dependent. At 770 °C (stage II), there are very sharp Mn spikes at both the γ/θ and γ/α interfaces, and thus the migration of both interfaces is controlled by carbon diffusion. A smooth distribution of Mn is left in the austenite, which verifies a long period of PLE-controlled austenite formation at the slow heating rate. Therefore, the migration mode of both interfaces switches from the PLE to the NPLE mode during heating from 750 °C to 770 °C. At 780 °C (stage III), cementite has totally dissolved. The Mn profile exhibits a zigzag shape at the γ/α interface, which means that the migration of the γ/α interface is controlled by Mn diffusion, e.g. the PLE mode.
### Fig. 10.
Fig. 10. (a, b) Evolution of C and Mn profiles during continuous heating at 0.1 °C /s; (c, d) Evolution of C and Mn profiles during continuous heating at 100 °C /s.
The kinetic transitions of the γ/θ and γ/α interfaces for the fast-heated case are different from those for the slow-heated case. As shown in Fig. 10(c) and (d), at 760 °C, both the γ/θ and γ/α interfaces migrate under the PLE mode, and the migration of the interfaces is very sluggish. At 800 °C and 820 °C, positive C gradients are built up in austenite, demonstrating that the migration mode is NPLE. Upon heating, a decrease of the C content at the γ/α interface but an increase at the γ/θ interface is predicted, corresponding to the thermodynamic conditions explained in section 4.1. Upon fast heating, it is predicted that there is almost no Mn redistribution at the migrating γ/α and γ/θ interfaces, and the steep concentration gradient of Mn at the θ/α interface is fully inherited by the austenite and yields a chemical boundary for both Mn and C within austenite.
For austenite growth under the NPLE mode during heating, the migration velocities of the austenite/cementite and austenite/ferrite interfaces, based on mass balance, can be described respectively by:

$v_{\gamma/\theta}=\frac{J_{C}^{\gamma}-J_{C}^{\theta}}{x_{\theta/\gamma}-x_{\gamma/\theta}}\approx \frac{J_{C}^{\gamma}}{x_{\theta/\gamma}-x_{\gamma/\theta}}$

$v_{\gamma/\alpha}=\frac{J_{C}^{\gamma}-J_{C}^{\alpha}}{x_{\gamma/\alpha}-x_{\alpha/\gamma}}\approx \frac{J_{C}^{\gamma}}{x_{\gamma/\alpha}}$
where xθ/γ (≈25 at.%) is the carbon content in cementite at the cementite/austenite interface, which is not temperature dependent. It is clearly shown that the competition between cementite dissolution (vγ/θ) and ferrite dissolution (vγ/α) is mainly dependent on the evolution of xγ/θ and xγ/α during heating. With increasing temperature, xγ/θ increases while xγ/α decreases, which results in faster dissolution kinetics of both cementite and ferrite. The DICTRA simulation results further indicate that the ratio of vγ/θ to vγ/α decreases with increasing heating rate. As a result, cementite is retained to a higher annealing temperature when the heating rate is increased from 0.1 °C/s to 100 °C/s. In addition to the heating rate, the dependence of xγ/θ and xγ/α on temperature is also affected by Mn redistribution between cementite and ferrite, which is expected to affect the interface migration during heating. The influence of Mn redistribution on austenite reversion from ferrite and cementite during fast heating should also be investigated in the future.
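Because the carbon flux through austenite, JCγ, enters both mass-balance expressions, the relative speed of the two interfaces reduces, to a first approximation, to a ratio of interfacial carbon contents. The sketch below evaluates that ratio for illustrative interface compositions (the xγ/θ and xγ/α values are assumed placeholders, not computed NPLE tie-lines) and shows why the cementite interface lags the ferrite interface, so that cementite can survive to higher temperature on fast heating.

```python
# Illustrative NPLE interfacial carbon contents (at.%); placeholder values, not DICTRA tie-lines.
x_theta_gamma = 25.0   # C in cementite at the theta/gamma interface (~Fe3C stoichiometry)
x_gamma_theta = 4.0    # C in austenite at the gamma/theta interface (assumed)
x_gamma_alpha = 2.0    # C in austenite at the gamma/alpha interface (assumed)

# Assuming a common carbon flux J_C in austenite feeds both interfaces:
# v(gamma/theta) / v(gamma/alpha) ~ x_gamma_alpha / (x_theta_gamma - x_gamma_theta)
ratio = x_gamma_alpha / (x_theta_gamma - x_gamma_theta)
print(f"v(gamma/theta) / v(gamma/alpha) ~ {ratio:.2f}")   # ~0.10: the gamma/theta interface is ~10x slower
```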
### 4.3. Kinetic transitions on cooling process
Fig. 11 presents simulation results for the austenite formation kinetics as a function of temperature during the intercritical thermal cycle. The intercritical temperatures are selected to be near the experimental intercritical temperatures in section 3.3, with a similar volume fraction of austenite. Fig. 11(a) indicates the predicted kinetics of the γ/θ and γ/α interface migration during heating and cooling for the slow-heated case. The γ/α interface is predicted to be stagnant at the beginning of cooling and then to start migrating backward into austenite, which leads to the formation of epitaxial ferrite as experimentally observed in Fig. 4(f). Based on the evolution of the C and Mn profiles in Fig. 11(b) and (c), there is a kinetic transition for the γ/α interface from PLE to NPLE during cooling, and the stagnant stage occurs because interface migration is in the PLE mode. A similar kinetic transition for the γ/α interface was also found during cyclic phase transformations [27,37]. As shown in Fig. 11(d), for the fast-heated case, the γ/α and γ/θ interfaces respectively migrate into ferrite and cementite at the beginning of cooling, which leads to inverse austenite formation. The inverse transformation was also observed in the dilatometer experiments, as shown in Fig. 3(c). Similar to the slow-heated case, the γ/α interface is also predicted to migrate backward into austenite. In both the slow and fast heating cases, the γ/θ interfaces are predicted to be nearly immobile during cooling, which means that the γ/θ interface migration is always controlled by Mn diffusion, as indicated in the insets of Fig. 11(c, f).
### Fig. 11.
Fig. 11. The predicted γ/α and γ/θ interface position as a function of temperature during heating and cooling and the corresponding elements distribution: (a-c) the slow-heated case, (d-f) the fast-heated case.
It is predicted that the heating rate plays an important role in the C and Mn distribution in the newly formed austenite, which is expected to affect phase transformations upon the subsequent cooling. For the slow-heated case, a smooth Mn gradient from nearly 16 to 1 % was predicted in austenite near the γ/θ interface (as shown in the inset of Fig. 11(c)), accompanied by a carbon gradient. Given the inhomogeneous C and Mn content in austenite, the outer ring of austenite with the lowest Mn and C enrichment would transform into ferrite, while the inner ring with a significant enrichment of Mn and C can be stabilized to ambient temperature. An intermediate enrichment of C and Mn in the middle ring of austenite could suppress ferrite formation upon cooling, while it is not enough to suppress martensite transformation. Therefore, a mixed microstructure consisting of cementite/austenite/martensite/ferrite is predicted to form, which is in good agreement with the experiments (see Fig. 4). In contrast to the slow-heated case, the Mn concentrations in cementite and ferrite were almost frozen and inherited by austenite in the fast-heated case. During cooling, the austenite/ferrite interface migrates into austenite quickly until it is blocked by the high Mn content inherited from the previous cementite. Austenite is retained to ambient temperature due to the significant C and Mn enrichment, which results in a microstructure consisting of cementite/austenite/ferrite or austenite/ferrite (see Fig. 5, Fig. 6).
Fig. 12 summarizes the evolution of the microstructure and Mn distribution during the thermal cycles. Both fast heating and slow heating could lead to chemical gradients within austenite due to the kinetic mismatch between element diffusion and austenite formation, while the sharpness of the chemical gradient is strongly dependent on heating rate and heating temperature. The sharpness of the chemical gradient within austenite then significantly affects austenite decomposition during cooling. Therefore, heating rate and heating temperature can be used to adjust the chemical heterogeneity within austenite and further tailor the final microstructure.
### Fig. 12.
Fig. 12. A sketch of the evolution of microstructure and Mn distribution during the thermal cycles.
## 5. Conclusion
In this study, austenite formation from a mixture of ferrite and spheroidized cementite with significant Mn partitioning in an Fe-0.23C-1.54 Mn alloy was systematically investigated, and the effect of heating rate on kinetic transitions and element partitioning was discussed in detail. Austenite growth was found to proceed via PLE (γ/α, γ/θ), NPLE (γ/α, γ/θ) and PLE (γ/α) controlled interface migration during slow heating, while the NPLE (γ/α, γ/θ) mode predominantly controlled austenitization upon fast heating through a synchronous dissolution of ferrite and cementite. It was both experimentally and theoretically found that, after austenite growth, the Mn distributions within the austenite grains of the slow- and fast-heated samples are inhomogeneous. For the fast-heated sample, the significant enrichment of Mn in the original cementite particles was fully inherited by the newly formed austenite, which leads to sharp Mn gradients within the austenite grains. However, this enrichment diffuses out in the slow-heated sample due to the long-range Mn partitioning during austenite growth. The different Mn distributions within the austenite grains of the slow- and fast-heated samples were found to play a significant role in the phase transformations upon subsequent cooling. The localized enrichment of Mn within austenite grains could suppress ferrite or martensite formation upon cooling, which helps stabilize the ultrafine austenite to ambient temperature. Therefore, it is expected that the heating rate can be used as an effective parameter to tune the magnitude of local Mn enrichment and further tailor the microstructure of steels.
## Acknowledgments
H. Chen acknowledges financial support from the National Natural Science Foundation of China (Grant U1860109, 51922054, U1808208 and U1764252) and Beijing Natural Science Foundation (2182024). Z. G. Yang acknowledges financial support from the National Natural Science Foundation of China (Grant 51771100). C. Zhang acknowledges financial support from the National Natural Science Foundation of China (Grant 51771097) and the Science Challenge Project (Grant TZ2018004). G. Liu acknowledges financial support from China postdoctoral science foundation (2018M631459).
## References
J. Huang, W.J. Poole, M. Militzer, Metall. Mater. Trans. A 35 (11) (2004) 3363-3375.
R. Wei, M. Enomoto, R. Hadian, H.S. Zurob, G.R. Purdy, Acta Mater. 61 (2) (2013) 697-707.
S.S. Sohn, B.J. Lee, S. Lee, N.J. Kim, J.H. Kwak, Acta Mater. 61 (13) (2013) 5050-5066.
X. Zhang, G. Miyamoto, T. Kaneshita, Y. Yoshida, Y. Toji, T. Furuhara, Acta Mater. 154 (2018) 1-13.
D.V. Shtansky, K. Nakai, Y. Ohmori, Acta Mater. 47 (9) (1999) 2619-2632.
Z.D. Li, G. Miyamoto, Z.G. Yang, T. Furuhara, Metall. Mater. Trans. A 42 (6) (2010) 1586-1596.
G.R. Speich, V.A. Demarest, R.L. Miller, Metall. Trans. A 12 (8) (1981) 1419-1428.
Z. Li, Z. Wen, F. Su, R. Zhang, Z. Zhou, J. Alloys. Compd. 727 (2017) 1050-1056.
U.R. Lenel R.W.K. Honeycombe, Met. Sci. 18 (1984) 503-510.
M. Hillert, K. Nilsson, L. Torndahl, J. Iron Steel Inst. 209 (1) (1971) 49-66.
G. Miyamoto, H. Usuki, Z.D. Li, T. Furuhara, Acta Mater. 58 (13) (2010) 4492-4502.
AbstractThe effects of addition of Si, Mn and Cr on the kinetics of reverse transformation at 1073 K from the spheroidized cementite structure obtained by heavy tempering of high carbon martensite are investigated. The rate for reverse transformation is the fastest in the Fe–0.6 mass% C binary material, and becomes slower with the addition of Mn, Si and Cr. In particular, the retarding effect of Cr addition is remarkable, and holding times orders of magnitude longer than for other alloys are necessary to complete the reverse transformation. Based on thermodynamics and TEM/EDS analyses, it is supposed that austenite growth is controlled by carbon diffusion in specimens with Si and Mn added, as well as the Fe–0.6 mass% C binary material, while a decrease in the carbon activity gradient with the addition of these elements results in slower reversion kinetics. However, in the Cr-added specimen, Cr diffusion is necessary for austenite growth, yielding extremely sluggish reaction kinetics.]]>
J. Emo, P. Maugis, A. Perlade, Comput. Mater. Sci. 125 (2016) 206-217.
Q. Lai, M. Gouné, A. Perlade, T. Pardoen, P. Jacques, O. Bouaziz, Y. Bréchet, Metall. Mater. Trans. A 47 (7) (2016) 3375-3386.
M. Enomoto, S. Li, Z.N. Yang, C. Zhang, Z.G. Yang, Calphad 61 (2018) 116-125.
F. Huyan, J.Y. Yan, L. Höglund, J. Ågren, A. Borgenstam, Metall. Mater. Trans. A 49 (4) (2018) 1053-1060.
C. Lesch, P. Álvarez, W. Bleck J. Gil Sevillano, Metall. Mater. Trans. A 38 (9) (2007) 1882-1890.
T. Lolla, G. Cola, B. Narayanan, B. Alexandrov, S.S. Babu, Mater. Sci. Technol. 27 (5) (2013) 863-875.
D. De Knijf, A. Puype, C. Föjer, R. Petrov, Mater. Sci. Eng., A 627 (2015) 182-190.
F.C. Cerda, C. Goulas, I. Sabirov, S. Papaefthymiou, A. Monsalve, R.H. Petrov, Mater. Sci. Eng., A 672 (2016) 108-120.
G. Liu, S. Zhang, J. Li, J. Wang, Q. Meng, Mater. Sci. Eng., A 669 (2016) 387-395.
W.W. Sun, Y.X. Wu, S.C. Yang, C.R. Hutchinson, Scripta Mater 146 (2018) 60-63.
R. Ding, Z. Dai, M. Huang, Z. Yang, C. Zhang, H. Chen, Acta Mater. 147 (2018) 59-69.
G. Miyamoto, J.C. Oh, K. Hono, T. Furuhara, T. Maki, Acta Mater. 55 (15) (2007) 5027-5038.
Y.X. Wu, W.W. Sun, M.J. Styles, A. Arlazarov, C.R. Hutchinson, Acta Mater. 159 (2018) 209-224.
J. Park, M. Jung, Y.K. Lee, J. Magn, Magn. Mater. 377 (2015) 193-196.
Y.C. Liu, F. Sommer, E.J. Mittemeijer, Acta Mater. 51 (2) (2003) 507-519.
AbstractThe γ→α phase transformation behaviours of Fe-Co and Fe-Mn alloys were systematically investigated by dilatometry and Differential Thermal Analysis (DTA). Two kinds of transformation kinetics, called normal and abnormal, were recognized for the first time and classified according to the variation of the ferrite formation rate. These transformation characteristics were observed for both isothermally and isochronally conducted annealing experiments. A transition, from abnormal to normal transformation kinetics, occurs for Fe-1.79at.%Co when successive heat treatment cycles are executed, which contrasts with Fe-2.26at.%Mn for which only normal transformation kinetics occurs after each of all successive heat treatment cycles. A possible mechanism for the appearance of abnormal transformation kinetics is given, which is based on the austenite grain size. Light microscopical analysis indicates a repeated nucleation of ferrite in front of the migrating γ/α interface.]]>
H. Chen, B. Appolaire S. van der Zwaag, Acta Mater. 59 (17) (2011) 6751-6760.
A series of cyclic partial phase transformation experiments has been performed to investigate the growth kinetics of the austenite to ferrite phase transformation, and vice versa, in Fe-Mn-C alloys. Unlike the usual phase transformation experiments (100% parent phase -> 100% new phase), in the case of cyclic partial transformations two special stages are observed: a stagnant stage in which the degree of transformation does not vary while the temperature changes, and an inverse phase transformation stage, during which the phase transformation proceeds in a direction contradictory to the temperature change. The experimental results have been analyzed using paraequilibrium and local equilibrium diffusional growth models. Only the local equilibrium model was shown to predict the new features of the cyclic phase transformation kinetics. The stagnant stage was found to be due to Mn partitioning, while the inverse phase transformation is caused by non-equilibrium conditions when switching from cooling to heating and vice versa. (C) 2011 Acta Materialia Inc. Published by Elsevier Ltd.
M.J. Santofimia, L. Zhao, R. Petrov, C. Kwakernaak, W.G. Sloof, J. Sietsma, Acta Mater. 59 (15) (2011) 6059-6068.
This paper presents a detailed characterization of the microstructural development of a new quenching and partitioning (Q&P) steel. Q&P treatments, starting from full austenitization, were applied to the developed steel, leading to microstructures containing volume fractions of retained austenite of up to 0.15. The austenite was distributed as films in between the martensite laths. Analysis demonstrates that, in this material, stabilization of austenite can be achieved at significantly shorter time scales via the Q&P route than is possible via a bainitic isothermal holding. The results showed that the thermal stabilization of austenite during the partitioning step is not necessarily accompanied by a significant expansion of the material. This implies that the process of carbon partitioning from martensite to austenite occurs across low-mobility martensite-austenite interfaces. The amount of martensite formed during the first quench has been quantified. Unlike martensite formed in the final quench, this martensite was found to be tempered during partitioning. Measured volume fractions of retained austenite after different treatments were compared with simulations using model descriptions for carbon partitioning from martensite to austenite. Simulation results confirmed that the carbon partitioning takes place at low-mobility martensite-austenite interfaces. (C) 2011 Acta Materialia Inc. Published by Elsevier Ltd.
C. Cayron, J. Appl. Crystallogr. 40 (Pt 6) (2007) 1183-1188.
A computer program called ARPGE written in Python uses the theoretical results generated by the computer program GenOVa to automatically reconstruct the parent grains from electron backscatter diffraction data obtained on phase transition materials with or without residual parent phase. The misorientations between daughter grains are identified with operators, the daughter grains are identified with indexed variants, the orientations of the parent grains are determined, and some statistics on the variants and operators are established. Some examples with martensitic transformations in iron and titanium alloys were treated. Variant selection phenomena were revealed.
G. Miyamoto, N. Iwata, N. Takayama, T. Furuhara, Acta Mater. 58 (19) (2010) 6393-6403.
AbstractA new method is developed for reconstruction of the local orientation of the parent austenite based on the orientation of lath martensite measured by electron backscattered diffraction. The local orientation of austenite was obtained by least squares fitting as the difference between the experimental data and the predicted martensite orientation was minimal, assuming the specific orientation relationship (OR) between martensite and the parent austenite. First, the average OR between austenite and lath martensite was precisely determined and it was shown that both close packed planes and directions between martensite and the parent austenite deviated by more than 1° in low carbon martensite. The quality of the reconstructed austenite orientation map depended strongly on the OR used for the calculation. When Kurdjumov–Sachs (K–S) or Nishiyama–Wasserman (N–W) ORs were used the austenite orientation was frequently mis-indexed as a twin orientation with respect to the true orientation because of the mirror symmetry of (0 1 1)α stacking in the K–S or N–W ORs. In contrast, the frequency of mis-indexing was significantly reduced by using the measured OR, where the close packed planes and directions were not parallel. The deformation structure in austenite was successfully reconstructed by applying the proposed method to ausformed martensite in low carbon steel.]]>
M.J. Santofimia, L. Zhao, J. Sietsma, Metall. Mater. Trans. A 40 (1) (2008) 46-57.
M. Belde, H. Springer, G. Inden, D. Raabe, Acta Mater. 86 (2015) 1-14.
M. Belde, H. Springer, D. Raabe, Acta Mater. 113 (2016) 19-31.
Y. Xia, M. Enomoto, Z. Yang, Z. Li, C. Zhang, Philos. Mag. 93 (9) (2013) 1095-1109.
Z.N. Yang, Y. Xia, M. Enomoto, C. Zhang, Z.G. Yang, Metall. Mater. Trans. A 47 (3) (2015) 1019-1027.
J. Zhu, H. Luo, Z. Yang, C. Zhang, S. van der Zwaag, H. Chen, Acta Mater. 133 (2017) 258-268.
H. Chen, M. Gounê S.V.D. Zwaag, Comput. Mater. Sci. 55 (2012) 34-43.
Lattice QCD is Easy to Parallelize
Regular grid of lattice points in spacetime
Sparse matrix inversion
Blue lines with arrows represent 3 x 3 complex matrices, i.e., "links"
Green dots are 3-component complex vectors, i.e., "matter fields"
Regular communication pattern
• Boundary values to neighboring nodes (domain decomposition)
• Global sums for dot products
Spacetime has four dimensions; however, for simplicity we only show neighbors in two dimensions.
Assume $L^4$ sites per node.
Data are stored so that at each site there are the link matrices connecting that site to its neighboring sites in the positive directions, together with the vector matter field. (A link pointing in a positive direction is stored at its tail-end site.) A link pointing in a negative direction is the adjoint of the link matrix pointing the opposite way, which means it is actually stored at its head end.
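To make this storage convention concrete, here is a minimal NumPy sketch of how the per-node fields could be laid out. The array names, the flat per-node lattice and the periodic wrap are illustrative assumptions, not the layout used by any particular lattice QCD code.

```python
import numpy as np

L = 4          # local lattice extent per dimension (assumed)
NDIM = 4       # spacetime dimensions
NCOL = 3       # colour components of SU(3)

# One 3x3 complex link matrix per site and per positive direction;
# the +mu link is stored at its tail-end site.
links = np.zeros((L, L, L, L, NDIM, NCOL, NCOL), dtype=np.complex128)

# One 3-component complex matter field per site.
matter = np.zeros((L, L, L, L, NCOL), dtype=np.complex128)

def negative_link(links, site, mu):
    """Link in the -mu direction at `site`: the adjoint of the +mu link
    stored at the neighbouring site in the -mu direction (periodic wrap assumed)."""
    idx = list(site)
    idx[mu] = (idx[mu] - 1) % L
    return links[tuple(idx)][mu].conj().T
```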
Thus, in the diagram below, the red links and the black vectors are stored at different sites, and the vectors must be gathered to the central site. On the other hand, each blue link and the corresponding green vector are stored together, but the temporary result vector computed at that site has to be moved to the central site so that all the contributions to the final result can be accumulated there.
Because of the domain decomposition, whenever a result has to be gathered or scattered, the values for sites at the edges of the domain that must be moved to another domain are put into a message and sent to the node that owns the other domain.
There is an even-odd, or red-black decomposition of the problem, so that if the central site is even, then the neighboring sites at the ends of the arrows are odd.
Let's assume that the central site is even. The strategy for overlapping communication and computation is to start gathering the vectors from the odd sites, i.e., the black dots at the ends of the red arrows. While those messages are in flight, we start the local computation of multiplying the blue links by the green vectors for all odd sites. (This computation must be done for all four directions of links.) Once these local computations are done, we start sending the results, which sit at odd sites, to the even sites at which they are needed. Before we can start the second stage of the computation, we must wait until the first set of messages (the black dots) has arrived. We then start the second stage of the computation, multiplying the red links at even sites by the vectors that have just arrived. Finally, we wait until the matrix-vector products from the first stage of the computation have arrived and accumulate all the results.
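As a structural illustration of this two-stage overlap, reduced to a single dimension and a single halo vector, a minimal mpi4py sketch might look as follows. The kernel is a placeholder multiplication and all names are assumptions rather than real lattice QCD code.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

n_local = 8                                   # sites per node (assumed)
vec = np.random.rand(n_local, 3) + 0j         # local "matter field" vectors

# Stage 1: start receiving the boundary vector from one neighbour while
# sending our own boundary vector to the other.
recv_halo = np.empty(3, dtype=vec.dtype)
send_halo = np.ascontiguousarray(vec[0])
requests = [comm.Irecv(recv_halo, source=right),
            comm.Isend(send_halo, dest=left)]

# Overlap: do the purely local matrix-times-vector work while the
# messages are in flight (here a placeholder 3x3 "link" matrix).
local_link = np.eye(3, dtype=vec.dtype)
partial = vec @ local_link.T

# Stage 2: wait for the halo to arrive, then add the boundary contribution.
MPI.Request.Waitall(requests)
partial[-1] += local_link @ recv_halo

if rank == 0:
    print("overlap sketch complete; local result shape:", partial.shape)
```

Run under an MPI launcher, e.g. `mpirun -n 4 python overlap_sketch.py`; the real code does this in all four directions and for both parities.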
We summarize the tasks during the first stage in the three steps below, and calculate the time for each step assuming the bandwidth for passing messages is MB and the rate for matrix times vector is MF. Note that MB and MF are achieved rates, not maximum rates.
a) For every odd site, M × V in each negative direction
b) For every even site on (+) boundary, receive a vector
c) For every odd site on (-) boundary, send a vector
$t_a = L^4/2 \times 66~\text{Flops} \times 4 / MF = 132\,L^4~\text{Flop}/MF$ (microsec)
$t_b = L^3/2 \times 24~\text{bytes} \times 4 / MB = 48\,L^3~\text{bytes}/MB$ (microsec)
$t_c = t_b$
To completely overlap computation and communication, we require that $t_a = t_b$:
$132\,L^4 / MF = 48\,L^3 / MB$, or
$MB = 48\,MF / (132\,L) = 0.364\,MF / L$
If the communication is not fast enough, you must wait for the messages to arrive. We are assuming full-duplex communication, so that steps b and c can proceed at the same time.
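For concreteness, the balance condition can be evaluated directly. The short sketch below simply plugs in an assumed achieved compute rate MF and prints the bandwidth needed for a few local lattice sizes.

```python
def required_bandwidth(MF_mflops, L):
    """Bandwidth (MB/s) needed so that t_a = t_b for L^4 sites per node."""
    return 48.0 * MF_mflops / (132.0 * L)

MF = 1000.0   # assumed achieved matrix-times-vector rate, in MFlop/s
for L in (4, 8, 16):
    print(f"L = {L:2d}: need about {required_bandwidth(MF, L):6.1f} MB/s")
```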
This is a simplification!
Search for scalar quarks in $e^{+}e^{-}$ collisions at $\sqrt{s}$ up to 209 GeV
Abstract: Searches for scalar top, scalar bottom and mass-degenerate scalar quarks are performed in the data collected by the ALEPH detector at LEP, at centre-of-mass energies up to 209 GeV, corresponding to an integrated luminosity of 675 pb^-1. No evidence for the production of such particles is found in the decay channels stop -> c/u chi, stop -> b l snu, sbottom -> b chi, squark -> q chi, or in the stop four-body decay channel stop -> b chi f f', studied for the first time at LEP. The results of these searches yield improved mass lower limits. In particular, an absolute lower limit of 63 GeV/c^2 is obtained for the stop mass, at 95% confidence level, irrespective of the stop lifetime and decay branching ratios.
Document type :
Journal articles
Cited literature [28 references]
http://hal.in2p3.fr/in2p3-00011717
Contributor: Magali Damoiseaux
Submitted on : Monday, July 1, 2002 - 12:12:31 PM
Last modification on : Tuesday, April 20, 2021 - 12:00:05 PM
Long-term archiving on: : Tuesday, June 2, 2015 - 12:31:17 PM
Identifiers
• HAL Id : in2p3-00011717, version 1
Citation
A. Heister, S. Schael, R. Barate, R. Bruneliere, I. de Bonis, et al.. Search for scalar quarks in $e^{+}e^{-}$ collisions at $\sqrt{s}$ up to 209 GeV. Physics Letters B, Elsevier, 2002, 537, pp.5-20. ⟨in2p3-00011717⟩
How to prove that two vectors are parallel or perpendicular

Two nonzero vectors A and B are parallel if and only if one is a scalar multiple of the other, that is, A = kB for some nonzero constant k. Equivalently, their cross product is the zero vector, or the angle θ between them is 0° or 180°, so that sin θ = 0. For example, i + 2j and -3i - 6j are parallel because -3i - 6j = -3(i + 2j). On the other hand, (1, 2, 4) is not parallel to (2, 1, 3), because no single scalar k satisfies 1 = 2k, 2 = k and 4 = 3k at the same time: trying k = 1/2 works for the first component but fails for the others.

Two vectors are perpendicular if and only if their dot product is zero. Since A · B = |A| |B| cos θ, we have cos θ = 1 when the vectors are parallel (θ = 0°), cos θ = -1 when they are anti-parallel (θ = 180°), and cos θ = 0 when they are perpendicular. Note that "orthogonal" is often used in place of "perpendicular".

Worked examples:
- A point M(x, y) lies on the line through A(1, 1) parallel to U = (2, -5) if and only if the vectors AM = (x - 1, y - 1) and U are parallel, i.e. (x - 1)(-5) = 2(y - 1). Expanding and simplifying gives the equation of the line.
- A point M(x, y) lies on the line through B(-2, -3) perpendicular to U = (2, -5) if and only if BM · U = 0, i.e. 2(x + 2) - 5(y + 3) = 0.
- A triangle ABC has a right angle at B if and only if BA · BC = 0.
- In two dimensions, vectors (a1, a2) and (b1, b2) are parallel exactly when a1·b2 - a2·b1 = 0. For instance, with BA = (-2 - 2, k - 3) = (-4, k - 3) and BC = (2k - 2, -4 - 3) = (2k - 2, -7), the two vectors are parallel precisely when (-4)(-7) - (k - 3)(2k - 2) = 0.

Equal vectors have the same magnitude and the same direction, even if they start at different positions; two column vectors are equal exactly when their corresponding components are equal. Vectors that are parallel to the same plane, or lie in the same plane, are called coplanar; any two vectors are automatically coplanar, since it is always possible to find a plane parallel to both. In geometric proofs, writing one vector as a scalar multiple of another is the standard way to show that two line segments are parallel; this is the key step in vector proofs that a quadrilateral is a parallelogram, or that the median of a trapezoid is parallel to its bases with length equal to half the sum of the bases.
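The conditions above are easy to check numerically. The following is a minimal sketch, assuming NumPy is available and using helper names of my own choosing, that tests the parallel and perpendicular conditions on the vectors from the examples above.

```python
import numpy as np

def _as3d(v):
    """Pad a 2-D vector with a zero so the cross product is well defined."""
    v = np.asarray(v, dtype=float)
    return np.pad(v, (0, 3 - v.size)) if v.size < 3 else v

def are_parallel(a, b, tol=1e-12):
    """Nonzero vectors are parallel iff their cross product is the zero vector."""
    return np.allclose(np.cross(_as3d(a), _as3d(b)), 0.0, atol=tol)

def are_perpendicular(a, b, tol=1e-12):
    """Two vectors are perpendicular iff their dot product is zero."""
    return abs(np.dot(a, b)) < tol

u = [2.0, -5.0]
print(are_parallel(u, [-4.0, 10.0]))                   # True: (-4, 10) = -2 * (2, -5)
print(are_perpendicular(u, [5.0, 2.0]))                # True: 2*5 + (-5)*2 = 0
print(are_parallel([1.0, 2.0, 4.0], [2.0, 1.0, 3.0]))  # False: no common scalar k
```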
Lemma 20.44.4. Let $f : (X, \mathcal{O}_ X) \to (Y, \mathcal{O}_ Y)$ be a morphism of ringed spaces. If $\mathcal{F}^\bullet$ is a strictly perfect complex of $\mathcal{O}_ Y$-modules, then $f^*\mathcal{F}^\bullet$ is a strictly perfect complex of $\mathcal{O}_ X$-modules.
Proof. The pullback of a finite free module is finite free. The functor $f^*$ is an additive functor, hence preserves direct summands. The lemma follows. $\square$
[–]
Give the middle element some name. Given that, how many ways can you choose the k left elements?
[–][S]
I am thinking that there are n-2k ways to choose the middle element, denote it m. Also I have that m=k+1 since 2k+1 is odd.
[–][S]
I think I have it figured out, however if anyone has any thoughts please share. I'd greatly appreciate it.
[–]
Sum over all possible middle elements.
[–]
"the middle element" - does this imply that the middle element is the arithmetic mean of the elements (assuming n is odd), or are you just saying any element in the "middle" such that there are at least k elements to its left and right?
[–][S]
This is exactly how the problem was stated in the book. By middle they meant the median, with k elements to the left and k elements to the right of it. I have already figured the answer out, but thanks anyways! | {} |
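In case it helps anyone who finds this later: here is a small brute-force check of the counting identity the hints point at, namely classifying the (2k+1)-element subsets of {1, ..., n} by their median m. The function names are just mine, and this assumes that reading of the problem is the intended one.

```python
from math import comb
from itertools import combinations

def by_median(n, k):
    """Sum over possible medians m: choose k elements below m and k above m."""
    return sum(comb(m - 1, k) * comb(n - m, k) for m in range(k + 1, n - k + 1))

def brute_force(n, k):
    """Directly count the (2k+1)-element subsets of {1, ..., n}."""
    return sum(1 for _ in combinations(range(1, n + 1), 2 * k + 1))

for n in range(1, 12):
    for k in range(n // 2 + 1):
        assert by_median(n, k) == brute_force(n, k) == comb(n, 2 * k + 1)
print("checked: sum over m of C(m-1, k) * C(n-m, k) = C(n, 2k+1)")
```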
Archived - Legislative and Regulatory Proposals Relating to the Taxation of Cannabis
Excise Act, 2001
1 (1) The definition container in section 2 of the Excise Act, 2001 is replaced by the following:
container, in respect of a tobacco product or a cannabis product, means a wrapper, package, carton, box, crate, bottle, vial or other container that contains the tobacco product or cannabis product. (contenant)
(2) The definition excise stamp in section 2 of the Act is replaced by the following:
excise stamp means a tobacco excise stamp or a cannabis excise stamp. (timbre d'accise)
(3) The definition stamped in section 2 of the Act is replaced by the following:
stamped means
(a) in respect of a tobacco product, that a tobacco excise stamp, and all prescribed information in a prescribed format in respect of the tobacco product, are stamped, impressed, printed or marked on, indented into or affixed to the tobacco product or its container in the prescribed manner to indicate that duty, other than special duty, has been paid on the tobacco product; and
(b) in respect of a cannabis product, that a cannabis excise stamp, and all prescribed information in a prescribed format in respect of the cannabis product, are stamped, impressed, printed or marked on, indented into or affixed to the cannabis product or its container in the prescribed manner to indicate that duty has been paid on the cannabis product. (estampillé)
(4) The definition take for use in section 2 of the Act is replaced by the following:
take for use means
(a) in respect of alcohol, to consume, analyze or destroy alcohol or to use alcohol for any purpose that results in a product other than alcohol; and
(b) in respect of a cannabis product, to consume, analyze or destroy the cannabis product. (utilisation pour soi)
(5) Paragraph (a) of the definition packaged in section 2 of the Act is replaced by the following:
(a) in respect of raw leaf tobacco, a tobacco product or a cannabis product, packaged in a prescribed package; or
(6) The definition produce in section 2 of the Act is amended by striking out "or" at the end of paragraph (a), by adding "or" at the end of paragraph (b) and by adding the following after paragraph (b):
(c) in respect of a cannabis product, has the same meaning as in subsection 2(1) of the Cannabis Act and also includes packaging the cannabis product. (production)
(7) Section 2 of the Act is amended by adding the following in alphabetical order:
additional cannabis duty means a duty imposed under section 158.2 or 158.22. (droit additionnel sur le cannabis)
cannabis has the same meaning as in subsection 2(1) of the Cannabis Act. (cannabis)
cannabis duty means a duty imposed under section 158.19 or 158.21. (droit sur le cannabis)
cannabis excise stamp means a stamp that is issued by the Minister under subsection 158.03(1) and that has not been cancelled under section 158.07. (timbre d'accise de cannabis)
cannabis licensee means a person that holds a cannabis licence issued under section 14. (titulaire de licence de cannabis)
cannabis plant has the same meaning as in subsection 2(1) of the Cannabis Act. (plante de cannabis)
cannabis product means
(a) a product that is cannabis but that is not industrial hemp produced or imported in accordance with the Cannabis Act or the Industrial Hemp Regulations;
(b) a product that is an industrial hemp by-product; or
(c) anything that is made with or contains a product described by paragraph (a) or (b). (produit du cannabis)
dutiable amount, in respect of a cannabis product, means the amount determined by the formula
A × [100% / (100% + B + C)]
where
A is the total of the following amounts that the purchaser is liable to pay to the vendor by reason of or in respect of the sale of the cannabis product:
(a) the consideration, as determined for the purposes of Part IX of the Excise Tax Act, for the cannabis product,
(b) any additional consideration, as determined for the purposes of that Part, for the container in which the cannabis product is contained, and
(c) any amount of consideration, as determined for the purposes of that Part, that is in addition to the amounts referred to in paragraphs (a) and (b), whether payable at the same or any other time, including, but not limited to, any amount charged for or to make provision for advertising, financing, commissions or any other matter;
B is the percentage set out in section 2 of Schedule 7; and
C is
(a) if additional cannabis duty in respect of a specified province is imposed on the cannabis product, the prescribed percentage in respect of the specified province, or
(b) in any other case, 0%. (somme passible de droits)
flowering material means the whole or any part (other than viable seeds) of an inflorescence of a cannabis plant at any stage of development, including the infructescence stage of development. (matière florifère)
industrial hemp means cannabis that is industrial hemp for the purposes of the Cannabis Act or the Industrial Hemp Regulations. (chanvre industriel)
industrial hemp by-product means flowering material (other than viable achenes) or non-flowering material that has been removed or separated from an industrial hemp plant and that has not
(a) been disposed of by retting or by otherwise rendering it into a condition such that it cannot be used for any purpose not permitted under the Controlled Drugs and Substances Act; or
(b) been disposed of in a similar manner under the Cannabis Act. (sous-produit de chanvre industriel)
industrial hemp grower means a person that holds a licence or permit under the Controlled Drugs and Substances Act or the Cannabis Act authorizing the person to produce industrial hemp plants. (producteur de chanvre industriel)
industrial hemp plant means a cannabis plant, including a seedling, that is industrial hemp. (plante de chanvre industriel)
non-flowering material means any part of a cannabis plant other than flowering material, viable seeds and a part of the plant referred to in Schedule 2 of the Cannabis Act. (matière non florifère)
specified province means a prescribed province. (province déterminée)
tobacco excise stamp means a stamp that is issued by the Minister under subsection 25.1(1) and that has not been cancelled under section 25.5. (timbre d'accise de tabac)
vegetative cannabis plant means a cannabis plant, including a seedling, that has not yet produced reproductive structures, including flowers, fruits or seeds. (plante de cannabis à l'état végétatif)
viable seed means a viable seed of a cannabis plant that is not an industrial hemp plant. (graine viable)
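Purely as an illustrative aside, and not part of the proposed legislative text: the formula in the definition of dutiable amount above backs a tax-included amount out of the total consideration. A minimal sketch follows, with placeholder rates, since the actual percentages for B and C are set by Schedule 7 and by regulation.

```python
def dutiable_amount(A, B, C):
    """Evaluate A x [100% / (100% + B + C)] from the definition above.

    A -- total the purchaser is liable to pay the vendor for the product
    B -- percentage set out in section 2 of Schedule 7 (placeholder here)
    C -- prescribed percentage for a specified province, or 0 otherwise
    B and C are given as decimal fractions (e.g. 2.5% -> 0.025).
    """
    return A * (1.0 / (1.0 + B + C))

# Hypothetical numbers purely for illustration: A = $10.00, B = 2.5%, C = 7.5%.
print(round(dutiable_amount(10.00, 0.025, 0.075), 2))  # 9.09
```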
2 (1) Subsection 5(1) of the Act is replaced by the following:
Constructive possession
5 (1) For the purposes of section 25.2, subsections 25.3(1), 30(1), 32(1) and 32.1(1), section 61, subsections 70(1) and 88(1), section 158.04, subsections 158.05(1) and 158.11(1), sections 230 and 231 and subsection 238.1(1), if one of two or more persons, with the knowledge and consent of the rest of them, has anything in the person's possession, it is deemed to be in the custody and possession of each and all of them.
(2) The portion of subsection 5(2) of the Act before paragraph (a) is replaced by the following:
Meaning of possession
(2) In this section and in section 25.2, subsections 25.3(1), 30(1), 32(1) and 32.1(1), section 61, subsections 70(1) and 88(1), section 158.04 and subsections 158.05(1), 158.11(1) and 238.1(1), possession means not only having in one's own personal possession but also knowingly
3 Section 14 of the Act is amended by adding the following after subsection (1):
Cannabis licence
(1.1) Subject to the regulations, on application, the Minister may issue to a person a cannabis licence for the purposes of this Act.
Cannabis licence — effect
(1.2) A cannabis licence issued to a person shall not have effect before a licence or permit issued to the person under subsection 62(1) of the Cannabis Act comes into effect.
4 (1) Subsection 23(2.1) of the Act is amended by deleting "or" at the end of paragraph (a) and by adding the following after paragraph (a):
(a.1) in the case of a cannabis licence, a licence or permit issued to the person under subsection 62(1) of the Cannabis Act is amended, suspended or revoked; or
(2) Paragraph 23(3)(b) of the Act is replaced by the following:
(b) shall, in the case of a spirits licence, a tobacco licence or a cannabis licence, require security in a form satisfactory to the Minister and in an amount determined in accordance with the regulations; and
5 The Act is amended by adding the following after section 158:
PART 4.1
Cannabis
Exclusions
Non-application
158.01 This Part does not apply to
(a) cannabis products that are produced in Canada by an individual for the personal use of the individual and in accordance with the Cannabis Act, but only to the extent that those cannabis products are used in activities that are not prohibited for those cannabis products under that Act;
(b) cannabis products that are produced in Canada by an individual for their medical purposes and in accordance with the Controlled Drugs and Substances Act or the Cannabis Act, but only to the extent that those cannabis products are used by the individual in activities that are not prohibited for those cannabis products under one of those Acts; or
(c) cannabis products that are produced in Canada by a designated person — being an individual who is authorized under the Controlled Drugs and Substances Act or the Cannabis Act to produce cannabis for the medical purposes of another individual — for the medical purposes of the other individual and in accordance with one of those Acts, but only to the extent that those cannabis products are used by the designated person or the other individual in activities that are not prohibited for those cannabis products under one of those Acts.
Cannabis Production and Stamping
Production without licence prohibited
158.02 (1) No person shall, other than in accordance with a cannabis licence issued to the person, produce cannabis products.
Deemed producer
(2) A person that, whether for consideration or otherwise, provides or offers to provide in their place of business equipment for use in that place by another person in the production of a cannabis product is deemed to be producing the cannabis product and the other person is deemed not to be producing the cannabis product.
Exception
(3) Subsection (1) does not apply in respect of
(a) the production of industrial hemp by-products by an industrial hemp grower; and
(b) a prescribed person that produces cannabis products in prescribed circumstances or for a prescribed purpose.
Issuance of cannabis excise stamps
158.03 (1) On application in the prescribed form and manner, the Minister may issue, to a cannabis licensee, stamps the purpose of which is to indicate that cannabis duty and, if applicable, additional cannabis duty have been paid on a cannabis product.
Quantity of cannabis excise stamps
(2) The Minister may limit the quantity of cannabis excise stamps that may be issued to a person under subsection (1).
Security
(3) No person shall be issued a cannabis excise stamp unless the person has provided any security required by regulation in a form satisfactory to the Minister.
Supply of cannabis excise stamps
(4) The Minister may authorize a producer of cannabis excise stamps to supply, on the direction of the Minister, cannabis excise stamps to a person to which those stamps are issued under subsection (1).
Design and construction
(5) The design and construction of cannabis excise stamps shall be subject to the approval of the Minister.
Counterfeiting cannabis excise stamps
158.04 No person shall produce, possess, sell or otherwise supply, or offer to supply, without lawful justification or excuse the proof of which lies on the person, anything that is intended to resemble or pass for a cannabis excise stamp.
Unlawful possession of cannabis excise stamps
158.05 (1) No person shall possess a cannabis excise stamp that has not been affixed to a packaged cannabis product in the manner prescribed for the purposes of the definition stamped in section 2 to indicate that duty has been paid on the cannabis product.
Exceptions — possession
(2) Subsection (1) does not apply to the possession of a cannabis excise stamp by
(a) the person that lawfully produced the cannabis excise stamp;
(b) the person to which the cannabis excise stamp is issued; or
(c) a prescribed person.
Unlawful supply of cannabis excise stamps
158.06 No person shall dispose of, sell or otherwise supply, or offer to supply, a cannabis excise stamp otherwise than in accordance with this Act.
Cancellation, return and destruction of cannabis excise stamps
158.07 The Minister may
(a) cancel a cannabis excise stamp that has been issued; and
(b) direct that it be returned or destroyed in a manner specified by the Minister.
Unlawful packaging or stamping
158.08 No person shall package or stamp a cannabis product unless the person is a cannabis licensee or a prescribed person.
Unlawful removal
158.09 (1) Except as permitted under section 158.15, no person shall remove a cannabis product from the premises of a cannabis licensee unless it is packaged and
(a) if the cannabis product is intended for the duty-paid market,
(i) it is stamped to indicate that cannabis duty has been paid, and
(ii) if additional cannabis duty in respect of a specified province is imposed on the cannabis product, it is stamped to indicate that the additional cannabis duty has been paid; or
(b) if the cannabis product is not intended for the duty-paid market, all prescribed information is printed on or affixed to its container in a prescribed manner.
Exception
(2) Subsection (1) does not apply to a cannabis licensee that removes from their premises a cannabis product
(a) for delivery to another cannabis licensee;
(b) for export as permitted under the Cannabis Act;
(c) for delivery to a person for analysis or destruction in accordance with paragraph 158.29(e); or
(d) in prescribed circumstances or for a prescribed purpose.
Prohibition — cannabis for sale
158.1 No person shall purchase or receive for sale
(a) a cannabis product from a producer that the person knows, or ought to know, is not
(i) a cannabis licensee, or
(ii) in the case of an industrial hemp by-product, an industrial hemp grower;
(b) a cannabis product that is required under this Act to be packaged and stamped unless it is packaged and stamped in accordance with this Act; or
(c) a cannabis product that the person knows, or ought to know, is fraudulently stamped.
Selling, etc., unstamped cannabis
158.11 (1) No person, other than a cannabis licensee, shall dispose of, sell, offer for sale, purchase or have in their possession a cannabis product unless
(a) it is packaged;
(b) it is stamped to indicate that cannabis duty has been paid; and
(c) if additional cannabis duty in respect of a specified province is imposed on the cannabis product, it is stamped to indicate that the additional cannabis duty has been paid.
Selling, etc., unstamped cannabis — specified province
(2) No person, other than a cannabis licensee, shall dispose of, sell, offer for sale, purchase or have in their possession a cannabis product in a specified province unless it is stamped to indicate that additional cannabis duty in respect of the specified province has been paid.
Exception — possession of cannabis
(3) Subsections (1) and (2) do not apply to the possession of a cannabis product by
(a) a prescribed person that is transporting the cannabis product under prescribed circumstances and conditions;
(b) an individual if the cannabis product was imported for their medical purposes in accordance with the Controlled Drugs and Substances Act or the Cannabis Act;
(c) a person that possesses the cannabis product for analysis or destruction in accordance with paragraph 158.29(e); or
(d) a prescribed person that possesses the cannabis product in prescribed circumstances or for a prescribed purpose.
Exception — disposal, sale, etc.
(4) Subsections (1) and (2) do not apply to the disposal, sale, offering for sale or purchase of a cannabis product by a prescribed person in prescribed circumstances or for a prescribed purpose.
Exception — industrial hemp
(5) Subsections (1) and (2) do not apply to
(a) the possession of an industrial hemp by-product by the industrial hemp grower that produced it, if the industrial hemp by-product
(i) is on the industrial hemp grower's property, or
(ii) is being transported by the industrial hemp grower for delivery to or return from a cannabis licensee; and
(b) the disposal, sale or offering for sale of an industrial hemp by-product to a cannabis licensee by the industrial hemp grower that produced it.
Sale or distribution by a licensee
158.12 (1) No cannabis licensee shall distribute a cannabis product or sell or offer for sale a cannabis product to a person unless
(a) it is packaged;
(b) it is stamped to indicate that cannabis duty has been paid; and
(c) if additional cannabis duty in respect of a specified province is imposed on the cannabis product, it is stamped to indicate that the additional cannabis duty has been paid.
Exception
(2) Subsection (1) does not apply to the distribution, sale or offering for sale of a cannabis product
(a) to a cannabis licensee;
(b) to another person if the cannabis product is exported by the cannabis licensee in accordance with the Cannabis Act; or
(c) to a prescribed person in prescribed circumstances or for a prescribed purpose.
Packaging and stamping of cannabis
158.13 A cannabis licensee that produces a cannabis product shall not enter the cannabis product into the duty-paid market unless
(a) the cannabis product has been packaged by the licensee;
(b) the package has printed on it prescribed information;
(c) the cannabis product is stamped at the time of packaging to indicate that cannabis duty has been paid; and
(d) if additional cannabis duty in respect of a specified province is required to be paid on the cannabis product, the cannabis product is stamped at the time of packaging to indicate that the additional cannabis duty has been paid.
Notice — absence of stamping
158.14 (1) The absence on a cannabis product of stamping that indicates that cannabis duty has been paid is notice to all persons that cannabis duty has not been paid on the cannabis product.
Notice — absence of stamping
(2) The absence on a cannabis product of stamping that indicates that additional cannabis duty in respect of a specified province has been paid is notice to all persons that additional cannabis duty in respect of the specified province has not been paid on the cannabis product.
Cannabis — waste removal
158.15 (1) No person shall remove a cannabis product that is waste from the premises of a cannabis licensee other than the cannabis licensee or a person authorized by the Minister.
Removal requirements
(2) If a cannabis product that is waste is removed from the premises of a cannabis licensee, it shall be dealt with in the manner authorized by the Minister.
Re-working or destruction of cannabis
158.16 A cannabis licensee may re-work or destroy a cannabis product in the manner authorized by the Minister.
Responsibility for Cannabis
Responsibility
158.17 Subject to section 158.18, a person is responsible for a cannabis product at any time if
(a) the person is
(i) the cannabis licensee that owns the cannabis product at that time, or
(ii) if the cannabis product is not owned at that time by a cannabis licensee, the cannabis licensee that last owned it; or
(b) the person is a prescribed person or a person that meets prescribed conditions.
Person not responsible
158.18 A person that is responsible for a cannabis product ceases to be responsible for it if
(a) it is packaged and stamped and the duty on it is paid;
(b) it is taken for use and the duty on it is paid;
(c) it is taken for use in accordance with section 158.29;
(d) it is exported in accordance with the Cannabis Act;
(e) it is lost in prescribed circumstances, if the person fulfills any prescribed conditions; or
(f) prescribed conditions are met.
Imposition and Payment of Duty on Cannabis
Imposition — flat-rate duty
158.19 (1) Duty is imposed on cannabis products produced in Canada at the time they are packaged in the amount determined under section 1 of Schedule 7.
Imposition — ad valorem duty
(2) Duty is imposed on cannabis products produced in Canada at the time of their delivery to a purchaser in the amount determined under section 2 of Schedule 7.
Duty payable
(3) The greater of the duty imposed under subsection (1) and the duty imposed under subsection (2) is payable by the cannabis licensee that packaged the cannabis products at the time of their delivery to a purchaser and the cannabis products are relieved of the lesser of those duties.
Equal duties
(4) If the amount of duty imposed under subsection (1) is equal to the amount of duty imposed under subsection (2), the duty imposed under subsection (1) is payable by the cannabis licensee that packaged the cannabis products at the time of their delivery to a purchaser and the cannabis products are relieved of the duty imposed under subsection (2).
Imposition — additional cannabis duty
158.2 (1) In addition to the duty imposed under section 158.19, a duty in respect of a specified province is imposed on cannabis products produced in Canada in prescribed circumstances in the amount determined in a prescribed manner.
Duty payable
(2) The duty imposed under subsection (1) is payable by the cannabis licensee that packaged the cannabis products at the time of their delivery to a purchaser.
Duty on imported cannabis
158.21 (1) Duty is imposed on imported cannabis products in the amount that is equal to the greater of
(a) the amount determined in respect of the cannabis products under section 1 of Schedule 7, and
(b) the amount determined in respect of the cannabis products under section 3 of Schedule 7.
Duty payable
(2) The duty imposed under subsection (1) is payable by the importer, owner or other person that is liable under the Customs Act to pay duty levied under section 20 of the Customs Tariff or that would be liable to pay that duty on the cannabis products if they were subject to that duty.
Additional cannabis duty on imported cannabis
158.22 (1) In addition to the duty imposed under section 158.21, a duty in respect of a specified province is imposed on imported cannabis products in prescribed circumstances in the amount determined in a prescribed manner.
Duty payable
(2) The duty imposed under subsection (1) is payable by the importer, owner or other person that is liable under the Customs Act to pay duty levied under section 20 of the Customs Tariff or that would be liable to pay that duty on the cannabis products if they were subject to that duty.
Application of Customs Act
158.23 The duties imposed under sections 158.21 and 158.22 on imported cannabis products shall be paid and collected under the Customs Act, and interest and penalties shall be imposed, calculated, paid and collected under that Act, as if the duties were a duty levied under section 20 of the Customs Tariff, and, for those purposes, the Customs Act applies with any modifications that the circumstances require.
Value for duty
158.24 For the purposes of section 3 of Schedule 7 and of any regulations made for the purposes of section 158.22 in respect of imported cannabis products,
(a) the value of a cannabis product is equal to the value of the cannabis product, as it would be determined under the Customs Act for the purpose of calculating duties imposed under the Customs Tariff on the cannabis product at a percentage rate, whether or not the cannabis product is in fact subject to duty under the Customs Tariff; or
(b) despite paragraph (a), the value of a cannabis product imported in prescribed circumstances shall be determined in a prescribed manner.
Duty on cannabis taken for use
158.25 (1) Subject to section 158.29, if cannabis products that are not packaged are taken for use, duty is imposed on the cannabis products in the amount that is equal to the greater of
(a) the amount determined in respect of the cannabis products under section 1 of Schedule 7, and
(b) the amount determined in respect of the cannabis products under section 4 of Schedule 7.
Specified province — duty on cannabis taken for use
(2) Subject to section 158.29, if cannabis products that are not packaged are taken for use, a duty in respect of a specified province is imposed on the cannabis products in prescribed circumstances in the amount determined in a prescribed manner. This duty is in addition to the duty imposed under subsection (1).
Duty payable
(3) The duty imposed under subsection (1) or (2) is payable at the time the cannabis product is taken for use by the person that is responsible for it at that time.
Duty on unaccounted cannabis
158.26 (1) If a particular person that is responsible at a particular time for cannabis products that are not packaged cannot account for the cannabis products as being, at the particular time, in the possession of a cannabis licensee or in the possession of another person in accordance with subsection 158.11(3) or paragraph 158.11(5)(a), duty is imposed on the cannabis products in the amount that is equal to the greater of
(a) the amount determined in respect of the cannabis products under section 1 of Schedule 7, and
(b) the amount determined in respect of the cannabis products under section 4 of Schedule 7.
Specified province — duty on unaccounted cannabis
(2) If a particular person that is responsible at a particular time for cannabis products that are not packaged cannot account for the cannabis products as being, at the particular time, in the possession of a cannabis licensee or in the possession of another person in accordance with subsection 158.11(3) or paragraph 158.11(5)(a), a duty in respect of a specified province is imposed on the cannabis products in prescribed circumstances in the amount determined in a prescribed manner. This duty is in addition to the duty imposed under subsection (1).
Duty payable
(3) The duty imposed under subsection (1) or (2) is payable at the particular time, and by the particular person, referred to in that subsection.
Exception
(4) Subsection (1) does not apply in circumstances in which the particular person referred to in that subsection is convicted of an offence under section 218.1.
Exception
(5) Subsection (2) does not apply in prescribed circumstances.
Duty relieved — cannabis imported by licensee
158.27 The duties imposed under sections 158.21 and 158.22 are relieved on a cannabis product that is not packaged and that is imported
(a) by a cannabis licensee; or
(b) by a prescribed person in prescribed circumstances or for a prescribed purpose.
Duty relieved — prescribed circumstances
158.28 The duties imposed under any of sections 158.19 to 158.22 are relieved on a cannabis product in prescribed circumstances or if prescribed conditions are met.
Duty not payable — cannabis taken for analysis, destruction, etc.
158.29 Duty is not payable on a cannabis product that is not packaged and that is
(a) taken for analysis or destroyed by the Minister;
(b) taken for analysis or destroyed by the Minister, as defined in subsection 2(1) of the Cannabis Act;
(c) taken for analysis by a cannabis licensee in a manner approved by the Minister;
(d) destroyed by a cannabis licensee in a manner approved by the Minister;
(e) delivered to another person for analysis or destruction by that person in a manner approved by the Minister; or
(f) delivered to a prescribed person in prescribed circumstances or for a prescribed purpose.
Definition of commencement day
158.3 (1) For the purposes of this section, commencement day has the same meaning as in section 152 of the Cannabis Act.
Duty on cannabis — production before commencement day
(2) Duty is imposed on cannabis products that are produced in Canada and delivered to a purchaser before commencement day for sale or distribution after that day in the amount that is equal to the greater of
(a) the amount determined in respect of the cannabis product under section 1 of Schedule 7, and
(b) the amount determined in respect of the cannabis product under section 2 of Schedule 7.
Duty payable
(3) The duty imposed under subsection (2) is payable on commencement day by the cannabis licensee that packaged the cannabis product.
Exception
(4) Subsection (2) does not apply to a cannabis product that is delivered to a prescribed person in prescribed circumstances or for a prescribed purpose.
Quantity of cannabis
158.31 For the purposes of determining an amount of duty in respect of a cannabis product under section 1 of Schedule 7, the following rules apply:
(a) the quantity of flowering material and non-flowering material included in the cannabis product or used in the production of the cannabis product is to be determined in a prescribed manner in prescribed circumstances; and
(b) if paragraph (a) does not apply in respect of the cannabis product,
(i) the quantity of flowering material and non-flowering material included in the cannabis product or used in the production of the cannabis product is to be determined at the time the flowering material and non-flowering material are so included or used and in a manner satisfactory to the Minister, and
(ii) if the quantity of flowering material included in the cannabis product or used in the production of the cannabis product is determined in accordance with subparagraph (i), the quantity of that flowering material that is industrial hemp by-product is deemed to be non-flowering material if that quantity is determined in a manner satisfactory to the Minister.
Delivery to a purchaser
158.32 For the purposes of sections 158.19, 158.2, 158.3 and 158.33 and for greater certainty, delivery to a purchaser includes
(a) delivering cannabis products, or making them available, to a person other than the purchaser on behalf of or under the direction of the purchaser;
(b) delivering cannabis products, or making them available, to a person that obtains them otherwise than by means of a purchase; and
(c) delivering cannabis products or making them available in prescribed circumstances.
Taking for use of packaged product
158.33 If a packaged cannabis product is taken for use by the cannabis licensee that packaged it, the following rules apply:
(a) for the purposes of sections 158.19, 158.2 and 158.3, the cannabis product is deemed to be delivered to a purchaser at the time it is taken for use; and
(b) for the purpose of section 2 of Schedule 7, the dutiable amount of the cannabis product is deemed to be equal to the fair market value of the cannabis product at the time it is taken for use.
Time of delivery
158.34 For the purposes of sections 158.19, 158.2 and 158.3, a cannabis product is deemed to be delivered to a purchaser by a cannabis licensee at the earliest of
(a) the time at which the cannabis licensee delivers the cannabis product or makes it available to the purchaser,
(b) the time at which the cannabis licensee causes physical possession of the cannabis product to be transferred to the purchaser, and
(c) the time at which the cannabis licensee causes physical possession of the cannabis product to be transferred to a carrier — being a person that provides a service of transporting goods including, for greater certainty, a service of delivering mail — for delivery to the purchaser.
Dutiable amount
158.35 For the purpose of section 2 of Schedule 7, the dutiable amount of a cannabis product is deemed to be equal to the fair market value of the cannabis product
(a) if the cannabis product is delivered or made available to a person that obtains it otherwise than by means of a purchase; or
(b) in prescribed circumstances.
6 Section 180 of the Act is replaced by the following:
No refund on exported tobacco products, cannabis products or alcohol
180 Subject to this Act, the duty paid on any tobacco product, cannabis product or alcohol entered into the duty-paid market shall not be refunded on the exportation of the tobacco product, cannabis product or alcohol.
7 The Act is amended by adding the following after section 187:
Refund of duty — destroyed cannabis
187.1 The Minister may refund to a cannabis licensee the duty paid on a cannabis product that is re-worked or destroyed by the cannabis licensee in accordance with section 158.16 if the cannabis licensee applies for the refund within two years after the cannabis product is re-worked or destroyed.
8 (1) Paragraph 206(1)(d) of the Act is replaced by the following:
(d) every person that transports a tobacco product or cannabis product that is not stamped or non-duty-paid packaged alcohol.
(2) The Act is amended by adding the following after subsection 206(2):
Keeping records — cannabis licensee
(2.01) Every cannabis licensee shall keep records that will enable the determination of the amount of cannabis product produced, received, used, packaged, sold or disposed of by the licensee.
9 Paragraph 211(6)(e) of the Act is amended by striking out "or" at the end of subparagraph (viii), by adding "or" at the end of subparagraph (ix) and by adding the following after subparagraph (ix):
(x) to an official solely for the administration or enforcement of the Cannabis Act;
10 (1) The portion of section 214 of the Act before paragraph (a) is replaced by the following:
Unlawful production, sale, etc., of tobacco, alcohol or cannabis
214 Every person that contravenes any of sections 25, 25.2 to 25.4, 27 and 29, subsection 32.1(1) and sections 60, 62, 158.04 to 158.06 and 158.08 is guilty of an offence and liable
(2) The portion of section 214 of the Act before paragraph (a), as enacted by subsection (1), is replaced by the following:
Unlawful production, sale, etc., of tobacco, alcohol or cannabis
214 Every person that contravenes any of sections 25, 25.2 to 25.4, 27 and 29, subsection 32.1(1) and sections 60, 62, 158.02, 158.04 to 158.06, 158.08 and 158.1 is guilty of an offence and liable
11 The Act is amended by adding the following after section 218:
Punishment — sections 158.11 and 158.12
218.1 (1) Every person that contravenes section 158.11 or 158.12 is guilty of an offence and liable
(a) on conviction on indictment, to a fine of not less than the amount determined under subsection (2) and not more than the amount determined under subsection (3) or to imprisonment for a term of not more than five years, or to both; or
(b) on summary conviction, to a fine of not less than the amount determined under subsection (2) and not more than the lesser of $500,000 and the amount determined under subsection (3) or to imprisonment for a term of not more than 18 months, or to both.
Minimum amount
(2) The amount determined under this subsection for an offence under subsection (1) is the greater of
(a) the amount determined under section 1 of Schedule 7, as that section read at the time the offence was committed, in respect of the cannabis products to which the offence relates multiplied by
(i) if the offence occurred in a specified province, 400%, and
(ii) in any other case, 200%, and
(b) $1,000 in the case of an indictable offence and $500 in the case of an offence punishable on summary conviction.
Maximum amount
(3) The amount determined under this subsection for an offence under subsection (1) is the greater of
(a) the amount determined under section 1 of Schedule 7, as that section read at the time the offence was committed, in respect of the cannabis products to which the offence relates multiplied by
(i) if the offence occurred in a specified province, 600%, and
(ii) in any other case, 300%, and
(b) $2,000 in the case of an indictable offence and $1,000 in the case of an offence punishable on summary conviction.
12 Paragraph 230(1)(a) of the Act is replaced by the following:
(a) the commission of an offence under section 214 or subsection 216(1), 218(1), 218.1(1) or 231(1); or
13 Paragraph 231(1)(a) of the Act is replaced by the following:
(a) the commission of an offence under section 214 or subsection 216(1), 218(1) or 218.1(1); or
14 Subsection 232(1) of the Act is replaced by the following:
Part XII.2 of Criminal Code applicable
232 (1) Sections 462.3 and 462.32 to 462.5 of the Criminal Code apply, with any modifications that the circumstances require, in respect of proceedings for an offence under section 214, subsection 216(1), 218(1) or 218.1(1) or section 230 or 231.
15 The Act is amended by adding the following after section 233:
Contravention of section 158.13
233.1 Every cannabis licensee that contravenes section 158.13 is liable to a penalty equal to 200% of the greater of
(a) the amount determined under section 1 of Schedule 7, as that section read at the time the contravention occurred, in respect of the cannabis products to which the contravention relates, and
(b) the amount obtained by multiplying the fair market value, at the time the contravention occurred, of the cannabis products to which the contravention relates by the percentage set out in section 4 of Schedule 7, as that section read at that time.
16 (1) Subsection 234(1) of the Act is replaced by the following:
Contravention of section 38, 40, 49, 61, 62.1, 99, 149, 151 or 158.15
234 (1) Every person that contravenes section 38, 40, 49, 61, 62.1, 99, 149, 151 or 158.15 is liable to a penalty of not more than $25,000.
(2) Section 234 of the Act is amended by adding the following after subsection (2):
Failure to comply
(3) Every person that fails to return or destroy stamps as directed by the Minister under paragraph 158.07(b) is liable to a penalty of not more than $25,000.
(3) Subsection 234(3) of the Act, as enacted by subsection (2), is replaced by the following:
Failure to comply
(3) Every person that fails to return or destroy stamps as directed by the Minister under paragraph 158.07(b), or that fails to re-work or destroy a cannabis product in the manner authorized by the Minister under section 158.16, is liable to a penalty of not more than $25,000.
17 The Act is amended by adding the following after section 234:
Contravention of section 158.02, 158.1, 158.11 or 158.12
234.1 Every person that contravenes section 158.02, that receives for sale cannabis products in contravention of section 158.1 or that sells or offers to sell cannabis products in contravention of section 158.11 or 158.12 is liable to a penalty equal to 200% of the greater of
(a) the amount determined under section 1 of Schedule 7, as that section read at the time the contravention occurred, in respect of the cannabis products to which the contravention relates, and
(b) the amount obtained by multiplying the fair market value, at the time the contravention occurred, of the cannabis products to which the contravention relates by the percentage set out in section 4 of Schedule 7, as that section read at that time.
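For illustration only and not as part of the enactment, the following minimal Python sketch works through the arithmetic of the section 234.1 penalty above: 200% of the greater of the flat-rate amount under section 1 of Schedule 7 and the fair market value multiplied by the percentage in section 4 of Schedule 7 (the rates enacted by section 24 of this bill, namely $0.50 per gram of flowering material, $0.15 per gram of non-flowering material and 5% ad valorem). The quantities and fair market value used in the example are illustrative assumptions.

```python
# Illustrative sketch only; not part of the enactment. Rates are those set out
# in Schedule 7 as enacted by section 24 of this bill.
def schedule7_section1(flowering_g=0.0, non_flowering_g=0.0, viable_seeds=0, veg_plants=0):
    """Flat-rate amount under section 1 of Schedule 7."""
    return (0.50 * flowering_g + 0.15 * non_flowering_g
            + 0.50 * viable_seeds + 0.50 * veg_plants)

def section_234_1_penalty(flowering_g, non_flowering_g, fair_market_value):
    """Penalty under section 234.1: 200% of the greater of the flat-rate amount
    and 5% (section 4 of Schedule 7) of the fair market value."""
    return 2.0 * max(schedule7_section1(flowering_g, non_flowering_g),
                     0.05 * fair_market_value)

# Example (assumed figures): 100 g of flowering material, fair market value $800.
# Flat-rate amount = $50.00; 5% of $800 = $40.00; penalty = 200% x $50.00 = $100.00.
print(section_234_1_penalty(100, 0, 800.0))
```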
18 (1) Paragraph 238.1(1)(a) of the Act is replaced by the following:
(a) the person can demonstrate that the stamps were affixed to tobacco products, cannabis products or their containers in the manner prescribed for the purposes of the definition stamped in section 2 and that duty, other than special duty, has been paid on the tobacco products or cannabis products; or
(2) Subsection 238.1(2) of the Act is replaced by the following:
Amount of the penalty
(2) The amount of the penalty for each excise stamp that cannot be accounted for is equal to
(a) in the case of a tobacco excise stamp, the duty that would be imposed on a tobacco product for which the stamp was issued under subsection 25.1(1); or
(b) in the case of a cannabis excise stamp, five times the dollar amount set out in paragraph 1(a) of Schedule 7.
19 (1) The portion of section 239 of the Act before paragraph (a) is replaced by the following:
Other diversions
239 Unless section 237 applies, every person is liable to a penalty equal to 200% of the duty that was imposed on packaged alcohol, a tobacco product or a cannabis product if
(2) Paragraph 239(a) of the French version of the Act is replaced by the following:
a) elle a acquis l'alcool emballé ou le produit et les droits n'étaient pas exigibles en raison du but dans lequel elle les a acquis ou de leur destination;
20 Section 264 of the Act is replaced by the following:
Certain things not to be returned
264 Despite this Act, any alcohol, specially denatured alcohol, restricted formulation, raw leaf tobacco, excise stamp, tobacco product or cannabis product that is seized under section 260 must not be returned to the person from whom it was seized or any other person unless it was seized in error.
21 Subsection 266(2) of the Act is amended by striking out "and" at the end of paragraph (c), by adding "and" at the end of paragraph (d) and by adding the following after paragraph (d):
(e) a seized cannabis product only to a cannabis licensee.
22 (1) Subsection 304(1) of the Act is amended by adding the following after paragraph (c):
(c.1) respecting the types of security that are acceptable for the purposes of subsection 158.03(3), and the manner by which the amount of the security is to be determined;
(2) Paragraph 304(1)(f) of the Act is replaced by the following:
(f) respecting the information to be provided on tobacco products, packaged alcohol and cannabis products and on containers of tobacco products, packaged alcohol and cannabis products;
(3) Paragraph 304(1)(n) of the Act is replaced by the following:
(n) respecting the sale under section 266 of alcohol, tobacco products, raw leaf tobacco, specially denatured alcohol, restricted formulations, or cannabis products seized under section 260;
23 The Act is amended by adding the following after section 304:
Definition of coordinated cannabis duty system
304.1 (1) In this section, coordinated cannabis duty system means the system providing for the payment, collection and remittance of duty imposed under any of sections 158.2 and 158.22 and subsections 158.25(2) and 158.26(2) and any provisions relating to duty imposed under those provisions or to refunds in respect of any such duty.
Coordinated cannabis duty system regulations — transition
(2) The Governor in Council may make regulations, in relation to the joining of a province to the coordinated cannabis duty system,
(a) prescribing transitional measures, including
(i) a tax on the inventory of cannabis products held by a cannabis licensee or any other person, and
(ii) a duty or tax on cannabis products that are delivered prior to the province joining that system; and
(b) generally to effect the implementation of that system in relation to the province.
Coordinated cannabis duty system regulations — rate flexibility
(3) The Governor in Council may make regulations
(a) prescribing rules in respect of whether, how and when a change in the rate of duty for a specified province applies (in this subsection and subsection (4) any such change in the rate of duty is referred to as the "rate flexibility"), including rules deeming, in specified circumstances and for specified purposes, the status of anything to be different than what it would otherwise be, including when duty is imposed or payable and when duty is required to be reported and accounted for;
(b) if a manner of determining an amount of duty is to be prescribed in relation to the coordinated cannabis duty system,
(i) specifying the circumstances or conditions under which a change in the manner applies, and
(ii) prescribing transitional measures in respect of a change in the manner, including
(A) a tax on the inventory of cannabis products held by a cannabis licensee or any other person, and
(B) a duty or tax on cannabis products that are delivered prior to the change; and
(c) prescribing amounts and rates to be used to determine any refund that relates to, or is affected by, the coordinated cannabis duty system, excluding amounts that would otherwise be included in determining any such refund, and specifying circumstances under which any such refund shall not be paid or made.
Coordinated cannabis duty system regulations — general
(4) For the purpose of facilitating the implementation, application, administration and enforcement of the coordinated cannabis duty system or rate flexibility or the joining of a province to the coordinated cannabis duty system, the Governor in Council may make regulations
(a) prescribing rules in respect of whether, how and when that system applies and rules in respect of other aspects relating to the application of that system in relation to a specified province, including rules deeming, in specified circumstances and for specified purposes, the status of anything to be different than what it would otherwise be, including when duty is imposed or payable and when duty is required to be reported and accounted for;
(b) prescribing rules related to the movement of cannabis products between provinces, including a duty, tax or refund in respect of such movement;
(c) providing for refunds relating to the application of that system in relation to a specified province;
(d) adapting any provision of this Act or of the regulations made under this Act to the coordinated cannabis duty system or modifying any provision of this Act or those regulations to adapt it to the coordinated cannabis duty system;
(e) defining, for the purposes of this Act or the regulations made under this Act, or any provision of this Act or those regulations, in its application to the coordinated cannabis duty system, words or expressions used in this Act or those regulations including words or expressions defined in a provision of this Act or those regulations;
(f) providing that a provision of this Act or of the regulations made under this Act, or a part of such a provision, does not apply to the coordinated cannabis duty system;
(g) prescribing compliance measures, including penalties and anti-avoidance rules; and
(h) generally in respect of the application of that system in relation to a province.
Conflict
(5) If a regulation made under this Act in respect of the coordinated cannabis duty system states that it applies despite any provision of this Act, in the event of a conflict between the regulation and this Act, the regulation prevails to the extent of the conflict.
Definition of cannabis duty system
304.2 (1) In this section, cannabis duty system means the system providing for the payment, collection and remittance of duty imposed under Part IV.1 and any provisions relating to duty imposed under that Part or to refunds in respect of any such duty.
Transitional cannabis duty system regulations
(2) For the purpose of facilitating the implementation, application, administration or enforcement of the cannabis duty system, the Governor in Council may make regulations adapting any provision of this Act or of the regulations made under this Act to take into account the making of regulations under the Cannabis Act or amendments to those regulations.
Retroactive effect
(3) Despite subsection 304(2), regulations made under subsection (2) may, if they so provide, be retroactive and have effect with respect to any period before they are made.
24 Schedule 7 to the Act is replaced by the following:
SCHEDULE 7
(Sections 2, 158.19, 158.21, 158.24 to 158.26, 158.3, 158.31, 158.33, 158.35, 218.1, 233.1, 234.1 and 238.1)
Duty on Cannabis Products
1 Any cannabis product produced in Canada or imported: the amount equal to the total of
(a) $0.50 per gram of flowering material included in the cannabis product or used in the production of the cannabis product,
(b) $0.15 per gram of non-flowering material included in the cannabis product or used in the production of the cannabis product,
(c) $0.50 per viable seed included in the cannabis product or used in the production of the cannabis product, and
(d) $0.50 per vegetative cannabis plant included in the cannabis product or used in the production of the cannabis product.
2 Any cannabis product produced in Canada: the amount obtained by multiplying the dutiable amount for the cannabis product by 5%.
3 Any imported cannabis product: the amount obtained by multiplying the value of the cannabis product by 5%.
4 Any cannabis product taken for use or unaccounted for: the amount obtained by multiplying the fair market value of the cannabis product by 5%.
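For illustration only and not as part of the enactment, the following minimal Python sketch shows how the Schedule 7 rates above interact with the "greater of" rule in subsections 158.19(3) and 158.21(1): the flat-rate amount under section 1 is compared with the 5% ad valorem amount under section 2 (or section 3 for imports), and the greater of the two is payable. The quantities and dutiable amount used in the example are illustrative assumptions; additional cannabis duty for a specified province is not modelled.

```python
# Illustrative sketch only; not part of the enactment. Rates are those set out
# in Schedule 7 above; additional cannabis duty for a specified province is ignored.
def flat_rate_duty(flowering_g=0.0, non_flowering_g=0.0, viable_seeds=0, veg_plants=0):
    """Section 1 of Schedule 7: per-gram and per-unit amounts."""
    return (0.50 * flowering_g + 0.15 * non_flowering_g
            + 0.50 * viable_seeds + 0.50 * veg_plants)

def ad_valorem_duty(dutiable_amount):
    """Section 2 of Schedule 7: 5% of the dutiable amount."""
    return 0.05 * dutiable_amount

def duty_payable(flowering_g, non_flowering_g, dutiable_amount):
    """Subsection 158.19(3): the greater of the two duties is payable."""
    return max(flat_rate_duty(flowering_g, non_flowering_g),
               ad_valorem_duty(dutiable_amount))

# Example (assumed figures): 10 g of flowering material, dutiable amount of $80.
# Flat-rate duty = $5.00; ad valorem duty = $4.00; so $5.00 is payable.
print(duty_payable(10, 0, 80.0))
```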
Excise Tax Act
25 The definition excisable goods in subsection 123(1) of the Excise Tax Act is replaced by the following:
excisable goods means beer or malt liquor (within the meaning assigned by section 4 of the Excise Act) and spirits, wine, tobacco products and cannabis products (within the meaning assigned by section 2 of the Excise Act, 2001); (produit soumis à l'accise)
26 The portion of section 4 of Part VI of Schedule V to the Act before paragraph (a) is replaced by the following:
4 A supply of tangible personal property (other than excisable goods) made by way of sale by a public sector body where
27 Section 1 of Part III of Schedule VI to the Act is amended by adding the following after paragraph (a):
(b) cannabis products, as defined in section 2 of the Excise Act, 2001;
28 (1) Section 2 of Part IV of Schedule VI to the French version of the Act is replaced by the following:
2 La fourniture de graines et de semences (autres que les graines viables qui constituent du cannabis au sens du paragraphe 2(1) de la Loi sur le cannabis) à leur état naturel, traitées pour l'ensemencement ou irradiées pour l'entreposage, de foin, de produits d'ensilage ou d'autres produits de fourrage, fournis en quantités plus importantes que celles qui sont habituellement vendues ou offertes pour vente aux consommateurs, et servant habituellement d'aliments pour la consommation humaine ou animale ou à la production de tels aliments, à l'exclusion des graines, des semences et des mélanges de celles-ci emballés, préparés ou vendus pour servir de nourriture aux oiseaux sauvages ou aux animaux domestiques.
(2) Paragraph 2(a) of Part IV of Schedule VI to the English version of the Act is replaced by the following:
(a) grains or seeds (other than viable seeds that are cannabis as defined in subsection 2(1) of the Cannabis Act) in their natural state, treated for seeding purposes or irradiated for storage purposes,
29 Paragraphs 3.1(b) and (c) of Part IV of Schedule VI to the Act are replaced by the following:
(b) in the case of viable grain or seeds, they are included in the definition industrial hemp in section 1 of the Industrial Hemp Regulations made under the Controlled Drugs and Substances Act or they are industrial hemp for the purposes of the Cannabis Act; and
(c) the supply is made in accordance with the Controlled Drugs and Substances Act or the Cannabis Act, if applicable.
30 Paragraphs 12(b) and (c) of Schedule VII to the Act are replaced by the following:
(b) in the case of viable grain or seeds, they are included in the definition industrial hemp in section 1 of the Industrial Hemp Regulations made under the Controlled Drugs and Substances Act or they are industrial hemp for the purposes of the Cannabis Act; and
(c) the importation is in accordance with the Controlled Drugs and Substances Act or the Cannabis Act, if applicable.
31 Section 6 of Part I of Schedule X to the Act is replaced by the following:
6 Property (other than advertising matter or excisable goods) that is a casual donation sent by a person in a non-participating province to a person in a participating province, or brought into a particular participating province by a person who is not resident in the participating provinces as a gift to a person in that participating province, where the fair market value of the property does not exceed $60, under such regulations as the Minister of Public Safety and Emergency Preparedness may make for purposes of heading No. 98.16 of Schedule I to the Customs Tariff.
Amendments to Various Regulations
Regulations Respecting Excise Licences and Registrations
32 (1) The portion of subsection 5(1) of the Regulations Respecting Excise Licences and Registrations before paragraph (a) is replaced by the following:
5 (1) For the purposes of paragraph 23(3)(b) of the Act, the amount of security to be provided by an applicant for a spirits licence, a tobacco licence or a cannabis licence is an amount of not less than $5,000 and
(2) Paragraph 5(1)(b) of the Regulations is replaced by the following:
(b) in the case of a tobacco licence or a cannabis licence, be sufficient to ensure payment of the amount of duty referred to in paragraph 160(b) of the Act up to a maximum amount of \$5 million.
Regulations Respecting the Possession of Tobacco Products That Are Not Stamped
33 The title of the Regulations Respecting the Possession of Tobacco Products That Are Not Stamped is replaced by the following:
REGULATIONS RESPECTING THE POSSESSION OF TOBACCO PRODUCTS OR CANNABIS PRODUCTS THAT ARE NOT STAMPED
34 The Regulations are amended by adding the following after section 1:
1.1 For the purposes of paragraph 158.11(3)(a) of the Excise Act, 2001, a person may possess a cannabis product that is not stamped if the person has in their possession documentation that provides evidence that the person is transporting the cannabis product on behalf of a cannabis licensee or, in the case of an industrial hemp by-product, an industrial hemp grower.
Stamping and Marking of Tobacco Products Regulations
35 The title of the Stamping and Marking of Tobacco Products Regulations is replaced by the following:
STAMPING AND MARKING OF TOBACCO AND CANNABIS PRODUCTS REGULATIONS
36 Paragraph 2(b) of the Regulations is replaced by the following:
(b) a tobacco product or a cannabis product is packaged in a prescribed package when it is packaged in the smallest package — including any outer wrapping that is customarily displayed to the consumer — in which it is normally offered for sale to the general public.
37 Subsection 4(2) of the Regulations is replaced by the following:
(2) For the purposes of paragraph 25.3(2)(d) of the Act, a prescribed person is a person who transports a tobacco excise stamp on behalf of a person described in paragraph 25.3(2)(a) or (b) of the Act.
(3) For the purposes of paragraph 158.05(2)(c) of the Act, a prescribed person is a person who transports a cannabis excise stamp on behalf of a person described in paragraph 158.05(2)(a) or (b) of the Act.
38 Subparagraphs 4.1(1)(a)(i) and (ii) of the Regulations are replaced by the following:
(i) the unaffixed tobacco excise stamps in the applicant's possession at the time of application, and
(ii) the tobacco excise stamps to be issued in respect of the application; and
39 The portion of section 4.2 of the Regulations before paragraph (a) is replaced by the following:
4.2 For the purposes of the definition stamped in section 2 of the Act and subsections 25.3(1) and 158.05(1) of the Act, the prescribed manner of affixing an excise stamp to a package is by affixing the stamp
Public Service Body Rebate (GST/HST) Regulations
40 Paragraph 4(1)(e) of the Public Service Body Rebate (GST/HST) Regulations is replaced by the following:
(e) excisable goods that are acquired by the particular person for the purpose of making a supply of the excisable goods for consideration that is not included as part of the consideration for a meal supplied together with the excisable goods, except where tax is payable in respect of the supply by the particular person of the excisable goods;
Terminology Changes
41 Every reference to "excise stamp" is replaced with a reference to "tobacco excise stamp", with any grammatical changes that the circumstances require, in the following provisions of the Excise Act, 2001:
(a) subsections 25.1(2) to (5);
(b) sections 25.2 to 25.4; and
(c) paragraph 25.5(a).
Consequential Amendments to Other Legislation
42 Consequential amendments to other legislation may be required as a result of sections 1 to 40.
Coming into Force
43 The following definitions apply for the purposes of sections 44 to 46.
Cannabis Act means Bill C-45, introduced in the 1st session of the 42nd Parliament and entitled An Act respecting cannabis and to amend the Controlled Drugs and Substances Act, the Criminal Code and other Acts. (Loi sur le cannabis)
commencement day has the same meaning as in section 152 of the Cannabis Act. (date de référence)
44 The following provisions come into force on the later of the day on which this Act receives royal assent and the day on which the Cannabis Act receives royal assent:
(a) sections 1 to 4;
(b) the headings before sections 158.01, 158.02, 158.17 and 158.19 of the Excise Act, 2001, as enacted by section 5;
(c) sections 158.01, 158.03 to 158.08, 158.14, 158.17, 158.18, 158.23, 158.24 and 158.27 to 158.35 of the Excise Act, 2001, as enacted by section 5; and
(d) sections 6 to 9, subsections 10(1) and 16(2) and sections 18, 20 to 33 and 35 to 41.
45 The following provisions come into force on commencement day:
(a) sections 158.02, 158.09 to 158.12, 158.15 and 158.16 of the Excise Act, 2001, as enacted by section 5; and
(b) subsection 10(2), sections 11 to 14, subsections 16(1) and (3) and sections 17, 19 and 34.
46 Sections 158.13, 158.19 to 158.22, 158.25 and 158.26 of the Excise Act, 2001, as enacted by section 5, and section 15 come into force on the later of the day on which this Act receives royal assent and the day on which the Cannabis Act receives royal assent, but
(a) section 158.13 of the Excise Act, 2001, as enacted by section 5, and section 15 only apply to cannabis products that are entered into the duty-paid market on or after commencement day, including cannabis products that are delivered at any time to a purchaser for sale or distribution on or after commencement day;
(b) sections 158.19 and 158.2 of the Excise Act, 2001, as enacted by section 5, only apply to packaged cannabis products that are delivered to a purchaser on or after commencement day;
(c) sections 158.21 and 158.22 of the Excise Act, 2001, as enacted by section 5, only apply to cannabis products that are imported into Canada or released (as defined in the Customs Act) on or after commencement day;
(d) section 158.25 of the Excise Act, 2001, as enacted by section 5, only applies to cannabis products that are taken for use on or after commencement day; and
(e) section 158.26 of the Excise Act, 2001, as enacted by section 5, only applies to cannabis products that, on or after commencement day, cannot be accounted for as being in the possession of a cannabis licensee or in the possession of a person in accordance with subsection 158.11(3) or paragraph 158.11(5)(a) of that Act, as enacted by section 5. | {} |
Publications and preprints
• Squarefrees are Gaussian in short intervals.
• With O. Gorodetsky and A. Mangerel.
[arXiv]
• Moments of polynomials with random multiplicative coefficients. To appear, Mathematika.
• With J. Benatar and A. Nishry.
[arXiv]
• On the variance of squarefree integers in short intervals and arithmetic progressions. Geom. Funct. Anal. 31 (2021), 111--149.
• With O. Gorodetsky, K. Matomäki, and M. Radziwill.
[arXiv] [Journal]
• Sums of singular series and primes in short intervals in algebraic number fields.
• With V. Kuperberg and E. Roditty-Gershon.
[arXiv]
• Traces of powers of matrices over finite fields. Trans. Amer. Math. Soc. 374 (2021), 4579--4638.
• With O. Gorodetsky.
[arXiv] [Journal]
• Band-limited mimicry of point processes by point processes supported on a lattice. Ann. Appl. Probab. 31 (2021), no. 1, 351--376.
• With J. Lagarias.
[arXiv] [Journal]
• Higher correlations and the alternative hypothesis. Q. J. Math. 71 (2020), no. 1, 257--280.
• With J. Lagarias.
[arXiv] [Journal]
• The variance of the number of sums of two squares in $\mathbb{F}_q[T]$ in short intervals. Amer. J. Math. 143 (2021), no. 6, 1703--1745.
• With O. Gorodetsky.
[arXiv] [Journal]
Code used for numerical graphs: [SAGE z-measures] [SAGE variance graphs] [MATLAB progressions] [MATLAB intervals]
• The De Bruijn-Newman constant is non-negative. Forum Math. Pi 8 (2020).
• With T. Tao.
[arXiv] [Journal]
• A limiting characteristic polynomial for classical matrix ensembles. Ann. Henri Poincare 20 (2019), 1093--1119.
• With R. Chhaibi, E. Hovhannisyan, J. Najnudel, and A. Nikeghbali.
[arXiv] [Journal]
• The variance of divisor sums in arithmetic progressions. Forum Math. 30 (2018), no. 2, 269--293.
• With K. Soundararajan.
[arXiv] [Journal]
• On the distribution of Rudin-Shapiro polynomials and lacunary walks on $SU(2)$. Adv. Math. 320 (2017), 993--1008.
• [arXiv] [Journal] [Corrections to published version: pdf]
• Arithmetic functions in short intervals and the symmetric group. Algebra Number Theory. 12 (2018), no. 5, 1243--1279.
• [arXiv] [Journal]
• Sums of divisor functions in $\mathbb{F}_q[t]$ and matrix integrals. Math. Z. 288 (2018), no. 1-2, 167--198.
• With J. Keating, E. Roditty-Gershon, and Z. Rudnick.
[arXiv] [Journal]
• Tail bounds for counts of zeros and eigenvalues, and an application to ratios. Comment. Math. Helv. 92 (2017), no. 2, 311--347.
• [arXiv] [Journal]
• Bootstrapped zero density estimates and a central limit theorem for the zeros of the zeta function. Int. J. Number Theory 11 (2015), no. 7, 2087--2107.
• With Kenneth Maples.
[arXiv] [Journal]
• The covariance of almost-primes in $\mathbb{F}_q[T]$. Int. Math. Res. Not. IMRN. 2015 (2015), no. 14, 5976--6004.
• [arXiv] [Journal]
• Arithmetic consequences of the GUE conjecture for zeta zeros.
• [PDF]
• A central limit theorem for the zeroes of the zeta function. Int. J. Number Theory 10 (2014), no. 2, 483--511.
• [arXiv] [Journal] [Corrections to published version: pdf]
• Macroscopic pair correlation of the Riemann zeroes for smooth test functions. Q. J. Math. 64 (2013), no. 4, 1197--1219.
• [arXiv] [Journal]
• The statistics of the zeros of the Riemann zeta-function and related topics. Ph.D. Thesis, University of California, Los Angeles. (2013) 228pp. | {} |
## Saturday, May 18, 2019
### Australia: climate hysterical Labourists lose unlosable election
We have some great news coming from Australia. Just like Trump and Brexit-Leave were predicted to lose by the pollsters, the center-right coalition led by the current prime minister Scott Morrison was predicted to comfortably lose the Australian federal election today. The pollsters were wrong in the Trump case, in the Brexit case, and they were wrong about Australia, too.
The pollsters were predicting at least a 52-to-48 edge for the Labor Party relative to the center-right coalition. In reality, counting the lawmakers (there are 151 in total), the center-right bloc won 74-to-66 or so, i.e. by more than ten percent of the Labor Party's seat count.
The winner, Mr Morrison, has already thanked the "miracles he has always believed in" and the "quiet Australians" for the victory; the loser, Mr Shorten, whose electorate was shortened relative to the predictions, has already admitted defeat.
What's wonderful is that the Labor Party has defined the election as the second election – after Finland's – that should be all about the climate hysteria. Look at the pre-election title Climate change to be decisive issue in Australian election chosen by the AFP and Al Jazeera. And the hysteria has lost!
Most people just dislike – or at least refuse to share – the climate hysteria. There is no reason for any worry related to the global climate, let alone for hysteria. The far-left would-be elites are living in a social bubble that prevents them from seeing this simple point – and many other points. Most people don't take you and your pathetic propaganda seriously, comrades, and millions of people viscerally hate you. The nastier and more dishonest things you do, the more people will hate you. Sadly, you've been getting increasingly nasty and dishonest in recent years – and a growing number of people have come to hate you.
The pollsters' failure is becoming the rule. They always predict that the leftist portion of the voters prevails – and reality often refuses to comply with these predictions. There may be two basic explanations for this repeated anomaly. One of them is that the pollsters deliberately skew the predictions in the left's favour, in order to encourage voters to join the seemingly stronger party. So the surveys may be manipulated by ideologically driven manipulators.
There is also a more innocent explanation which still proves that the leftists are doing some immoral things: people may be afraid of revealing their opposition to the left-wing policies and politicians because the "overt" opponents of the creeping left-wing totalitarianism are being harassed in many societies – comparably to German Jews around 1935. At any rate, the ballots are still secret, so the truth often prevails.
The Australian leftists have gone fanatical on the climate issue – they were talking about "emergency" etc. They're not the only ones. Many other extreme leftists are becoming even more fanatical these days.
The Grauniad, the British left-wing daily, has also complied with the recommendations of the psychologically unstable Swedish girl who is skipping classes, and their new official policy now strongly discourages the term "climate change".
No, they don't want the journalists to call it "climate hysteria" or "climate panic", terms used by all the sane people in the world. Instead, they want the journalists to talk about the "climate emergency, crisis, or breakdown". (Just a few years ago, some folks promoted "climate disruption" – that phrase didn't catch on.) "Global warming" should be renamed to "global heating" (Czech journalists were really puzzled about this particular arbitrary change, especially because Czech alarmists haven't invented a new translation that wouldn't sound silly). And of course, aside from some random changes of the terminology for "fish stocks", "biodiversity", and "wildlife", they also replace "climate skeptics" by the already standard phrases "climate deniers" or "climate science deniers".
It's really the climate alarmists who are denying the climate science, but we have already gotten used to these insults and I am surely not the only one who is proud when he is described as a well-known "climate denier". The two words really mean an "expert in climate change issues".
The Grauniad generously says that the journalists aren't quite forbidden from using the old phrases such as "climate change" but they need to think twice and they will be frowned upon.
It's incredible that they're apparently incapable of realizing how clownish and obnoxious they look to everybody – even though not everybody has the courage to openly admit it. Do you remember how the climate fearmongers were "updating" the term "global warming" to "climate change"? The globe didn't really consistently warm so they had to use something ill-defined and "climate change" became the politically correct name of the hysterical pseudo-religious and pseudo-scientific movement of the extreme leftists.
Now, just a few years later, after the temperature of the globe has changed by some undetectable 0.05 °C, The Grauniad is telling their faithful that "climate change" is already pretty much politically incorrect as well – because it isn't sufficiently hysterical. Can't they see that the more often they change their linguistic restrictions and recommendations, and the more frequently they update their insults of the people who actually understand the basic dynamics of the climate, the less credible all their propaganda looks?
It seems impossible for me to have empathy for the people who are willing to openly associate themselves with this ludicrous pseudo-intellectual garbage – to publicly market themselves as brain-dead and unhinged sheep who are updating their own ludicrous politically correct vocabulary and lists of taboo words according to some centralized ideological-inkspillers-in-chief every few years in order to look even more pathetic than ever before.
Why don't you just give it up, leftist comrades? People are leaving manipulative newspapers like the Grauniad, which are dying. Instead, they listen to the more credible people – and teenagers like Soph who will one day decide which of you will get a life in prison.
dc.contributor.author: de Lange, Sindre Eik
dc.contributor.author: Heilund, Stian Amland
dc.date.issued: 2019-06-28
dc.identifier.uri: https://hdl.handle.net/1956/20845
dc.description.abstract: The demographic challenges caused by the proliferation of people of advanced age, and the following large expense of care facilities, are faced by many western countries, including Norway (eldrebølgen). A common denominator for the health conditions faced by the elderly is that they can be improved through the use of physical therapy. By combining the state-of-the-art methods in deep learning and robotics, one can potentially develop systems relevant for assisting in rehabilitation training for patients suffering from various diseases, such as stroke. Such systems can be made to not depend on physical contact, i.e. socially assistive robots. As of this writing, the current state of the art for action recognition is presented in a paper called "Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition", introducing a deep learning model called spatial temporal graph convolutional network (ST-GCN) trained on DeepMind's Kinetics dataset. We combine the ST-GCN model with the Robot Operating System (ROS) into a system deployed on a TurtleBot 3 Waffle Pi, equipped with an NVIDIA Jetson AGX Xavier and a web camera mounted on top. This results in a completely physically independent system, able to interact with people, both interpreting input and outputting relevant responses. Furthermore, we achieve a substantial decrease in inference time compared to the ST-GCN pipeline, making the pipeline about 150 times faster and achieving close to real-time processing of video input. We also run multiple experiments to increase the model's accuracy, such as transfer learning, layer freezing, and hyperparameter tuning, focusing on batch size, learning rate, and weight decay.
dc.language.iso: nob
dc.publisher: The University of Bergen
dc.rights: Copyright the Author. All rights reserved
dc.title: Autonomous mobile robots - Giving a robot the ability to interpret human movement patterns, and output a relevant response.
dc.type: Master thesis
dc.description.degree: Masteroppgave i informatikk (Master's thesis in informatics)
dc.description.localcode: INF399, MAMN-PROG, MAMN-INF
# 9.1 confidence interval for the population mean when the population standard deviation is known
## Presentation on theme: "9.1 confidence interval for the population mean when the population standard deviation is known"— Presentation transcript:
CHAPTER 9 ESTIMATING THE VALUE OF A PARAMETER USING CONFIDENCE INTERVAL
9.1 confidence interval for the population mean when the population standard deviation is known
9.1 Objectives
1. Compute a point estimate of the population mean
2. Construct and interpret a confidence interval for the population mean assuming that the population standard deviation is known
3. Understand the role of margin of error in constructing the confidence interval
4. Determine the sample size necessary for estimating the population mean within a specified margin of error
Objective 1: Compute a Point Estimate of the Population Mean. A point estimate is the value of a statistic that estimates the value of a parameter. For example, the sample mean, x̄, is a point estimate of the population mean, μ.
Parallel Example 1: Computing a Point Estimate
Pennies minted after 1982 are made from 97.5% zinc and 2.5% copper. The following data represent the weights (in grams) of 17 randomly selected pennies minted after 1982. Treat the data as a simple random sample. Estimate the population mean weight of pennies minted after 1982.
Solution: The sample mean of the 17 weights is x̄ = 2.464 grams, so the point estimate of μ is 2.464 grams.
Objective 2 Construct and Interpret a Confidence Interval for the Population Mean
A confidence interval for an unknown parameter consists of an interval of values.
The level of confidence represents the expected proportion of intervals that will contain the parameter if a large number of different samples is obtained. The level of confidence is denoted (1 − α)·100%.
For example, a 95% level of confidence (α = 0.05) implies that if 100 different confidence intervals are constructed, each based on a different sample from the same population, we will expect 95 of the intervals to contain the parameter and 5 to not include the parameter.
Confidence interval estimates for the population mean are of the form: point estimate ± margin of error. The margin of error of a confidence interval estimate of a parameter is a measure of how accurate the point estimate is.
The margin of error depends on three factors:
1. Level of confidence: As the level of confidence increases, the margin of error also increases.
2. Sample size: As the size of the random sample increases, the margin of error decreases.
3. Standard deviation of the population: The more spread there is in the population, the wider our interval will be for a given level of confidence.
The shape of the distribution of all possible sample means will be normal if the population is normal, and approximately normal if the sample size is large (n ≥ 30), with mean μ and standard deviation σ/√n.
Interpretation of a Confidence Interval
A (1 − α)·100% confidence interval indicates that, if we obtained many simple random samples of size n from the population whose mean, μ, is unknown, then approximately (1 − α)·100% of the intervals will contain μ. For example, if we constructed a 99% confidence interval with a lower bound of 52 and an upper bound of 71, we would interpret the interval as follows: "We are 99% confident that the population mean, μ, is between 52 and 71."
Constructing a (1 − α)·100% Confidence Interval for μ, σ Known
Suppose that a simple random sample of size n is taken from a population with unknown mean, μ, and known standard deviation σ. A (1 − α)·100% confidence interval for μ is given by
Lower bound: x̄ − z_{α/2}·σ/√n    Upper bound: x̄ + z_{α/2}·σ/√n
where z_{α/2} is the critical z-value. Note: The sample size must be large (n ≥ 30) or the population must be normally distributed.
Parallel Example 3: Constructing a Confidence Interval
Construct a 99% confidence interval about the population mean weight (in grams) of pennies minted after 1982. Assume σ = 0.02 grams.
Lower bound: x̄ − z_{0.005}·σ/√n = 2.464 − 2.575·0.02/√17 ≈ 2.452. Upper bound: x̄ + z_{0.005}·σ/√n = 2.464 + 2.575·0.02/√17 ≈ 2.476. We are 99% confident that the mean weight of pennies minted after 1982 is between 2.452 and 2.476 grams.
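A quick numerical check of this example, as a sketch in Python (assuming SciPy is available); the values 2.464, 0.02, and 17 are the ones used above:

```python
from math import sqrt
from scipy.stats import norm

xbar, sigma, n = 2.464, 0.02, 17   # sample mean, known sigma, sample size from the example
alpha = 0.01                       # 99% confidence level
z = norm.ppf(1 - alpha / 2)        # critical z-value, about 2.576
E = z * sigma / sqrt(n)            # margin of error, about 0.012

print(f"99% CI: ({xbar - E:.3f}, {xbar + E:.3f})")  # roughly (2.452, 2.476)
```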
Objective 3 Understand the Role of the Margin of Error in Constructing a Confidence Interval
The margin of error, E, in a (1 − α)·100% confidence interval in which σ is known is given by E = z_{α/2}·σ/√n, where n is the sample size. Note: We require that the population from which the sample was drawn be normally distributed or the sample size n be greater than or equal to 30.
Parallel Example 5: Role of the Level of Confidence in the Margin of Error
Construct a 90% confidence interval for the mean weight of pennies minted after 1982. Comment on the effect that decreasing the level of confidence has on the margin of error.
Lower bound: 2.464 − 1.645·0.02/√17 ≈ 2.456. Upper bound: 2.464 + 1.645·0.02/√17 ≈ 2.472. We are 90% confident that the mean weight of pennies minted after 1982 is between 2.456 and 2.472 grams.
Notice that the margin of error decreased from 0.012 to 0.008 when the level of confidence decreased from 99% to 90%. The interval is therefore wider for the higher level of confidence.

| Confidence Level | Margin of Error | Interval |
|---|---|---|
| 90% | 0.008 | (2.456, 2.472) |
| 99% | 0.012 | (2.452, 2.476) |
Parallel Example 6: Role of Sample Size in the Margin of Error
Suppose that we obtained a simple random sample of 35 pennies minted after 1982. Construct a 99% confidence interval with n = 35. Assume the larger sample size results in the same sample mean, x̄ = 2.464. The standard deviation is still σ = 0.02. Comment on the effect increasing sample size has on the width of the interval.
Lower bound: 2.464 − 2.575·0.02/√35 ≈ 2.455. Upper bound: 2.464 + 2.575·0.02/√35 ≈ 2.473. We are 99% confident that the mean weight of pennies minted after 1982 is between 2.455 and 2.473 grams.
Notice that the margin of error decreased from 0.012 to 0.009 when the sample size increased from 17 to 35. The interval is therefore narrower for the larger sample size.

| Sample Size | Margin of Error | Confidence Interval |
|---|---|---|
| 17 | 0.012 | (2.452, 2.476) |
| 35 | 0.009 | (2.455, 2.473) |
Objective 4 Determine the Sample Size Necessary for Estimating the Population Mean within a Specified Margin of Error
Determining the Sample Size n
The sample size required to estimate the population mean, μ, with a level of confidence (1 − α)·100% and a specified margin of error, E, is given by n = (z_{α/2}·σ/E)², where n is rounded up to the nearest whole number.
Parallel Example 7: Determining the Sample Size
Back to the pennies. How large a sample would be required to estimate the mean weight of a penny manufactured after 1982 to within 0.005 grams with 99% confidence? Assume σ = 0.02.
Solution: σ = 0.02 and E = 0.005, so n = (z_{0.005}·σ/E)² = (2.575·0.02/0.005)² ≈ 106.1. Rounding up, we find n = 107.
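The same calculation in code, a sketch assuming SciPy is available; the exact z-value from norm.ppf is slightly larger than the rounded 2.575 used above, but the rounded-up answer is the same:

```python
from math import ceil
from scipy.stats import norm

sigma, E, alpha = 0.02, 0.005, 0.01   # known sigma, desired margin of error, 99% confidence
z = norm.ppf(1 - alpha / 2)           # about 2.576
n = ceil((z * sigma / E) ** 2)        # round UP to the next whole number

print(n)  # 107, matching the hand calculation
```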
9.2 confidence interval for the population mean when the population standard deviation is unknown
9.2 Objectives
1. Know the properties of Student's t-distribution
2. Determine t-values
3. Construct and interpret a confidence interval for a population mean when the standard deviation is unknown.
Objective 1 Know the Properties of Student’s t-Distribution
Student’s t-Distribution
Suppose that a simple random sample of size n is taken from a population. If the population from which the sample is drawn follows a normal distribution, the distribution of t = (x̄ − μ)/(s/√n) follows Student's t-distribution with n − 1 degrees of freedom, where x̄ is the sample mean and s is the sample standard deviation.
Parallel Example 1: Comparing the Standard Normal Distribution to the t-Distribution Using Simulation
Obtain 1,000 simple random samples of size n = 5 from a normal population with μ = 50 and σ = 10. Determine the sample mean and sample standard deviation for each of the samples. Compute z = (x̄ − μ)/(σ/√n) and t = (x̄ − μ)/(s/√n) for each sample. Draw a histogram for both z and t.
[Figure: histogram of the simulated z values (left) and of the simulated t values (right)]
CONCLUSIONS: The histogram for z is symmetric and bell-shaped with the center of the distribution at 0 and virtually all the rectangles between -3 and 3. In other words, z follows a standard normal distribution. The histogram for t is also symmetric and bell-shaped with the center of the distribution at 0, but the distribution of t has longer tails (i.e., t is more dispersed), so it is unlikely that t follows a standard normal distribution. The additional spread in the distribution of t can be attributed to the fact that we use s to find t instead of σ. Because the sample standard deviation is itself a random variable (rather than a constant such as σ), we have more dispersion in the distribution of t.
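The simulation in Parallel Example 1 is easy to reproduce; a sketch assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 50, 10, 5, 1000

samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)          # sample standard deviation

z = (xbar - mu) / (sigma / np.sqrt(n))   # uses the known sigma
t = (xbar - mu) / (s / np.sqrt(n))       # uses the estimated s

# t is noticeably more spread out than z, as the conclusions above describe
print(np.std(z), np.std(t))
```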
Properties of the t-Distribution
1. The t-distribution is different for different degrees of freedom.
2. The t-distribution is centered at 0 and is symmetric about 0.
3. The area under the curve is 1. The area under the curve to the right of 0 equals the area under the curve to the left of 0, which equals 1/2.
4. As t increases without bound, the graph approaches, but never equals, zero. As t decreases without bound, the graph approaches, but never equals, zero.
5. The area in the tails of the t-distribution is a little greater than the area in the tails of the standard normal distribution, because we are using s as an estimate of σ, thereby introducing further variability into the t-statistic.
6. As the sample size n increases, the density curve of t gets closer to the standard normal density curve. This result occurs because, as the sample size n increases, the values of s get closer to the values of σ, by the Law of Large Numbers.
Objective 2 Determine t-Values
Parallel Example 2: Finding t-values
Find the t-value such that the area under the t-distribution to the right of the t-value is 0.2 assuming 10 degrees of freedom. That is, find t0.20 with 10 degrees of freedom.
Solution: The figure shows the graph of the t-distribution with 10 degrees of freedom. The unknown value of t is labeled, and the area under the curve to the right of t is shaded. The value of t0.20 with 10 degrees of freedom is approximately 0.879.
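Critical t-values like this one can be read from a table or computed directly; a sketch assuming SciPy:

```python
from scipy.stats import t

# t-value with area 0.20 to its right, 10 degrees of freedom
print(t.ppf(1 - 0.20, df=10))   # about 0.879
```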
Objective 3 Construct and Interpret a Confidence Interval for a Population Mean
Constructing a (1 − α)·100% Confidence Interval for μ, σ Unknown
Suppose that a simple random sample of size n is taken from a population with unknown mean μ and unknown standard deviation σ. A (1 − α)·100% confidence interval for μ is given by
Lower bound: x̄ − t_{α/2}·s/√n    Upper bound: x̄ + t_{α/2}·s/√n
where t_{α/2} has n − 1 degrees of freedom. Note: The interval is exact when the population is normally distributed. It is approximately correct for nonnormal populations, provided that n is large enough.
Parallel Example 3: Constructing a Confidence Interval about a Population Mean
The pasteurization process reduces the amount of bacteria found in dairy products, such as milk. The following data represent the counts of bacteria in pasteurized milk (in CFU/mL) for a random sample of 12 pasteurized glasses of milk. Data courtesy of Dr. Michael Lee, Professor, Joliet Junior College. Construct a 95% confidence interval for the bacteria count.
NOTE: Each observation is in tens of thousands. So, 9.06 represents 9.06 × 10⁴.
Lower bound: x̄ − t_{0.025}·s/√12. Upper bound: x̄ + t_{0.025}·s/√12. The 95% confidence interval for the mean bacteria count in pasteurized milk is (3.52, 9.30).
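The twelve data values are not listed above, so the sketch below (assuming SciPy, with hypothetical placeholder counts) only shows the mechanics; with the actual data it would reproduce the (3.52, 9.30) interval:

```python
import numpy as np
from scipy.stats import t

# hypothetical placeholder values -- substitute the 12 counts from the slide (in 10,000s of CFU/mL)
data = np.array([9.06, 4.50, 7.20, 3.80, 6.10, 5.90, 8.40, 2.70, 7.80, 4.10, 6.60, 5.30])

n = len(data)
xbar, s = data.mean(), data.std(ddof=1)
tcrit = t.ppf(0.975, df=n - 1)          # 95% confidence, n - 1 = 11 degrees of freedom

E = tcrit * s / np.sqrt(n)
print(f"({xbar - E:.2f}, {xbar + E:.2f})")
```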
9.3 confidence interval for a population proportion
9.3 Objectives
1. Obtain a point estimate for the population proportion
2. Construct and interpret a confidence interval for the population proportion
3. Determine the sample size necessary for estimating a population proportion within a specified margin of error
Objective 1 Obtain a point estimate for the population proportion
A point estimate is an unbiased estimator of the parameter. The point estimate for the population proportion is p̂ = x/n, where x is the number of individuals in the sample with the specified characteristic and n is the sample size.
Parallel Example 1: Calculating a Point Estimate for the Population Proportion
In July of 2008, a Quinnipiac University Poll asked 1783 registered voters nationwide whether they favored or opposed the death penalty for persons convicted of murder; x of them were in favor. Obtain a point estimate for the proportion of registered voters nationwide who are in favor of the death penalty for persons convicted of murder.
Objective 2 Construct and Interpret a Confidence Interval for the Population Proportion
Sampling Distribution of p̂
For a simple random sample of size n, the sampling distribution of p̂ is approximately normal with mean p and standard deviation √(p(1 − p)/n), provided that np(1 − p) ≥ 10. NOTE: We also require that each trial be independent when sampling from finite populations.
Constructing a (1 − α)·100% Confidence Interval for a Population Proportion
Suppose that a simple random sample of size n is taken from a population. A (1 − α)·100% confidence interval for p is given by the following quantities
Lower bound: p̂ − z_{α/2}·√(p̂(1 − p̂)/n)    Upper bound: p̂ + z_{α/2}·√(p̂(1 − p̂)/n)
Note: It must be the case that np̂(1 − p̂) ≥ 10 and n ≤ 0.05N to construct this interval.
Parallel Example 2: Constructing a Confidence Interval for a Population Proportion
In July of 2008, a Quinnipiac University Poll asked 1783 registered voters nationwide whether they favored or opposed the death penalty for persons convicted of murder; x of them were in favor. Obtain a 90% confidence interval for the proportion of registered voters nationwide who are in favor of the death penalty for persons convicted of murder.
Solution: np̂(1 − p̂) ≥ 10 and the sample size is definitely less than 5% of the population size. α = 0.10, so z_{α/2} = z_{0.05} = 1.645. With p̂ ≈ 0.63: Lower bound: p̂ − 1.645·√(p̂(1 − p̂)/1783) ≈ 0.61. Upper bound: p̂ + 1.645·√(p̂(1 − p̂)/1783) ≈ 0.65.
We are 90% confident that the proportion of registered voters who are in favor of the death penalty for those convicted of murder is between 0.61 and 0.65.
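In code, a sketch assuming SciPy; the sample proportion 0.63 is implied by the reported interval rather than stated explicitly in the transcript, so treat it as an assumption:

```python
from math import sqrt
from scipy.stats import norm

n = 1783
phat = 0.63                       # implied by the reported interval (0.61, 0.65)
z = norm.ppf(0.95)                # 90% confidence -> z_{0.05}, about 1.645

E = z * sqrt(phat * (1 - phat) / n)
print(f"({phat - E:.2f}, {phat + E:.2f})")   # about (0.61, 0.65)
```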
Objective 3 Determine the Sample Size Necessary for Estimating a Population Proportion within a Specified Margin of Error
The sample size required to obtain a (1 − α)·100% confidence interval for p with a margin of error E is given by n = p̂(1 − p̂)(z_{α/2}/E)² (rounded up to the next integer), where p̂ is a prior estimate of p. If a prior estimate of p is unavailable, the sample size required is n = 0.25(z_{α/2}/E)².
Parallel Example 4: Determining Sample Size
A sociologist wanted to determine the percentage of residents of America that only speak English at home. What size sample should be obtained if she wishes her estimate to be within 3 percentage points with 90% confidence assuming she uses the 2000 estimate obtained from the Census 2000 Supplementary Survey of 82.4%?
Solution: E = 0.03, p̂ = 0.824, and z_{0.05} = 1.645, so n = 0.824(1 − 0.824)(1.645/0.03)² ≈ 436.1. Rounding up, the required sample size is n = 437.
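The same arithmetic in code, using the rounded table value z = 1.645 as above; the second computed value shows the conservative fallback when no prior estimate of p is available:

```python
from math import ceil

z, E = 1.645, 0.03            # z_{0.05} from the table, margin of error of 3 percentage points

n_with_prior = ceil(0.824 * (1 - 0.824) * (z / E) ** 2)   # uses the 82.4% Census estimate
n_no_prior   = ceil(0.25 * (z / E) ** 2)                  # conservative fallback, p-hat = 0.5

print(n_with_prior, n_no_prior)   # 437 and 752
```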
9.4 confidence interval for a population standard deviation
9.4 Objectives
1. Find critical values for the chi-square distribution
2. Construct and interpret confidence intervals for the population variance and standard deviation
Objective 1 Find Critical Values for the Chi-Square Distribution
If a simple random sample of size n is obtained from a normally distributed population with mean μ and standard deviation σ, then χ² = (n − 1)s²/σ² has a chi-square distribution with n − 1 degrees of freedom.
Characteristics of the Chi-Square Distribution
1. It is not symmetric.
2. The shape of the chi-square distribution depends on the degrees of freedom, just like the Student's t-distribution.
3. As the number of degrees of freedom increases, the chi-square distribution becomes more nearly symmetric.
4. The values of χ² are nonnegative; that is, values of χ² are always greater than or equal to 0.
Parallel Example 1: Finding Critical Values for the Chi-Square Distribution
Find the chi-square values that separate the middle 95% of the distribution from the 2.5% in each tail. Assume 18 degrees of freedom.
Solution: With 18 degrees of freedom, the chi-square values that separate the middle 95% of the distribution from the 2.5% in each tail are χ²_{0.975} = 8.231 and χ²_{0.025} = 31.526.
Objective 2 Construct and Interpret Confidence Intervals for the Population Variance and Standard Deviation
A (1 − α)·100% Confidence Interval for σ²
If a simple random sample of size n is taken from a normal population with mean μ and standard deviation σ, then a (1 − α)·100% confidence interval for σ² is given by
Lower bound: (n − 1)s²/χ²_{α/2}    Upper bound: (n − 1)s²/χ²_{1−α/2}
Note: To find a (1 − α)·100% confidence interval for σ, take the square root of the lower bound and upper bound.
Parallel Example 2: Constructing a Confidence Interval for a Population Variance and Standard Deviation
One way to measure the risk of a stock is through the standard deviation rate of return of the stock. The following data represent the weekly rate of return (in percent) of Microsoft for 15 randomly selected weeks. Compute the 90% confidence interval for the risk of Microsoft stock. Source: Yahoo!Finance
Solution: A normal probability plot and boxplot indicate the data are approximately normal with no outliers. With the sample standard deviation s (and variance s²) computed from the data, and χ²_{0.95} = 6.571 and χ²_{0.05} = 23.685 for 15 − 1 = 14 degrees of freedom, the bounds are Lower: (n − 1)s²/χ²_{0.05} and Upper: (n − 1)s²/χ²_{0.95}. We are 90% confident that the population standard deviation rate of return of the stock lies between the square roots of these two bounds.
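The fifteen weekly returns are not listed above, so this sketch (assuming SciPy, with hypothetical placeholder returns) shows only the mechanics of the interval:

```python
import numpy as np
from scipy.stats import chi2

# hypothetical placeholder returns (in percent) -- substitute the 15 values from the slide
returns = np.array([1.2, -0.8, 2.5, 0.3, -1.7, 0.9, 3.1, -2.2, 0.4, 1.8, -0.5, 2.0, -1.1, 0.7, 1.5])

n = len(returns)
s2 = returns.var(ddof=1)

lower_var = (n - 1) * s2 / chi2.ppf(0.95, df=n - 1)   # divide by chi-square_{0.05} (right-tail notation)
upper_var = (n - 1) * s2 / chi2.ppf(0.05, df=n - 1)   # divide by chi-square_{0.95}

print(np.sqrt(lower_var), np.sqrt(upper_var))          # 90% CI for the standard deviation
```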
9.5 CONCLUSION Determine the appropriate confidence interval to construct
# Default value for a state created by QuantumRegister
What's the default value for a state created by QuantumRegister(1,'name_of_the_register')? Is it a $$|0\rangle$$ or a $$|1\rangle$$?
## 1 Answer
Here's the source code for quantumregister.py and quantumcircuit.py.
The default is $$|0\rangle$$. The code goes like:
from qiskit import QuantumCircuit, QuantumRegister
qr = QuantumRegister(1)
circuit = QuantumCircuit(qr)
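To double-check this, you can add a measurement and run the circuit on a simulator. This is only a sketch using the pre-1.0 Qiskit API shown above and assumes qiskit-aer is installed; all counts should come back as '0':

from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, Aer, execute

qr = QuantumRegister(1)
cr = ClassicalRegister(1)
circuit = QuantumCircuit(qr, cr)
circuit.measure(qr, cr)  # measure the untouched qubit

counts = execute(circuit, Aer.get_backend('qasm_simulator'), shots=1024).result().get_counts()
print(counts)  # expected: {'0': 1024}, i.e. the default state is |0>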
By the way, if you're just beginning with Qiskit, you could check out Dr. Moran's textbook (this specific example is covered in chapter 5, ~p. 83).
# How do I show that this function is always $> 0$
Show that $$f(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} > 0 ~~~ \forall_x \in \mathbb{R}$$
I can show that the first 3 terms are $> 0$ for all $x$:
$(x+1)^2 + 1 > 0$
But, I'm having trouble with the last two terms. I tried to show that the following was true:
$\frac{x^3}{3!} \leq \frac{x^4}{4!}$
$4x^3 \leq x^4$
$4 \leq x$
which is not true for all $x$.
I tried taking the derivative and all that I could ascertain was that the the function became more and more increasing as $x \rightarrow \infty$ and became more and more decreasing as $x \rightarrow -\infty$, but I couldn't seem to prove that there were no roots to go with this property.
-
Use the result on the first two terms you have and see that $f^{'''}(x)>0$. Further when $f^{'''}(x)>0$, then $f^{''}(x)$ is always increasing, so is $f^{'}(x)$ and last but not least $f(x)$. – draks ... Jun 9 '12 at 16:16
Or use the solutions of the Quartic polynomial and check whether it has real roots. – draks ... Jun 9 '12 at 16:29
The argument of Andrew Salmon works if we truncate the series for $e^x$ at any even exponent, call that $f_{2n}(x).$ The trick is that $f_{k}' = f_{k-1},$ so $f_k(x) = f_{k-1}(x) + \frac{x^k}{k!}.$ Back to $k$ even, we get $$f_{2n}(x) = f_{2n}'(x) + \frac{x^{2n}}{(2n)!}.$$ At any local minimum, the derivative is $0.$ As the minimum does not occur at $x=0,$ the minimum is positive. Some of this involves induction on $n.$ – Will Jagy Jun 9 '12 at 18:51
Hint: $$f(x) = \frac{1}{4} + \frac{(x + 3/2)^2}{3} +\frac{x^2(x+2)^2}{24}$$
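A quick check, expanding the right-hand side term by term:
$$\frac{1}{4} + \frac{x^2 + 3x + \tfrac{9}{4}}{3} + \frac{x^4 + 4x^3 + 4x^2}{24} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} = f(x).$$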
-
Oh! Very nice and short. I'd +1 twice if I could. – Gigili Jun 9 '12 at 16:56
Completing the square solves everything that ever existed! – stariz77 Jun 9 '12 at 17:13
@stariz77, yes, it is true for single-variable polynomials but not for several variables: en.wikipedia.org/wiki/Hilbert's_seventeenth_problem – lhf Jun 9 '12 at 19:03
$f$ is a polynomial, and therefore, is differentiable at all points. Furthermore, as $x\to\infty$ or $x\to-\infty$, $f(x)\to+\infty$. Thus, if $f(x)\le0$ for some $x$, then $f(x)\le0$ for some relative minimum.
$$f'(x)=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}$$
$f'(x)=0$ for all relative minima. However, if $f'(x)=0$, then $$f(x)=f'(x)+\frac{x^4}{4!}$$
Thus, $f(x)=\frac{x^4}{4!}>0$ for all relative minima $x\not=0$. $x=0$ is not a relative minimum, because $f'(0)\not=0$, so this equation holds for all relative minima of $f$. This contradicts our assumption, so $f(x)>0$ for all $x\in \mathbb R$.
-
+1 Nice, but I'd edit the answer, pointing out simply: since $$f(x)=f'(x)+\frac{x^4}{4!}\,\,\forall\,x\in\mathbb{R}$$, if we have a minimum at $\,x_0\,$ then $$f(x_0)=f'(x_0)+\frac{x_0^4}{4!}=\frac{x_0^4}{4!}>0$$as clearly $\,x_0\neq 0\,$ , which contradicts our initial assumption that $\,f\leq 0\,$ at some minimum... – DonAntonio Jun 9 '12 at 17:01
@DonAntonio We can polish it up even more, the contradiction is not required, the entire answer could be: $x=0$ is not a minimum, so if $x_0$ is a minimum then $f(x_0) = f'(x_0) + x^4_0/4! > 0.$ So $f> 0.$ – Ragib Zaman Jun 9 '12 at 17:45
This argument shows that, if we truncate the series for the exponential ending with any even exponent, the resulting polynomial is always positive. And, by induction, positive second derivative and a single global minimum. – Will Jagy Jun 9 '12 at 18:20
You have had some good ideas so far. You tried to see when this was true: $$\frac{x^3}{3!} \leq \frac{x^4}{4!}.$$
You rearranged this to $4x^3\leq x^4$ but you made an incorrect conclusion when you divided by $x^3$ (if $x<0$ then the inequality sign should flip). Instead, lets divide by $x^2$ to get $4x \leq x^2$ or $x(x-4)\geq 0.$ This is true when $x\leq 0$ or $x\geq 4$ so the desired inequality is true in that range.
For $0< x < 4$ we don't have $\frac{x^3}{3!} \leq \frac{x^4}{4!}$ but lets see if the other terms can save us. To do this, we need to see exactly how large $g(x) = x^3/3! - x^4/4!$ can be in $(0,4).$ We calculate that $g'(x) = -(x-3)x^2/6$ so $g$ increases when $0\leq x\leq 3$, the maximum occurs at $g(3)=9/8$, and then it decreases after that.
This is good, because the $1+x+x^2/2$ terms obviously give at least $1$ from $x=0$, and will give us more as $x$ gets bigger. So we solve $1+x+x^2/2=9/8$ and we take the positive solution which is $\frac{\sqrt{5}-2}{2} \approx 0.118.$ So the inequality is definitely true for $x\geq 0.12$ because $g$ is at most $9/8$ and $1+x+x^2/2$ accounts for that amount in that range.
Remember that $g$ was increasing from $x=0$ to $x=3$, so the largest $g$ can be in the remaining range is $g(0.12) = 873/3125000 <1$, which is less than the amount $1+x+x^2/2$ gives us. So the inequality is also true for $0\leq x\leq 0.12$, so overall, for all $x.$
So all in all, the only trouble was for $x$ in $(0,4)$ and the contribution from the other terms was always enough to account for $x^3/3!$ when $x^4/4!$ wasn't enough.
-
Observe that
$$e^x = f(x) + \frac{x^5}{5!} + \cdots$$
Then show that for $x<0$ $$f(x) > \frac{x^5}{5!} + \cdots$$
But $e^x > 0, \forall x\in R$
So, $$f(x) > e^x - f(x) \Rightarrow 2f(x)>e^x>0$$
-
Well, showing $\,\displaystyle{f(x)>\frac{x^5}{5!}}\,$ may prove to be tricky... – DonAntonio Jun 9 '12 at 16:45
Well, not so much (unless I'm mistaken) but I don't want to give away the answer. – Eelvex Jun 9 '12 at 16:49
Well, perhaps I'm wrong...I'll expect your posting on this after some time has ellapsed. – DonAntonio Jun 9 '12 at 16:52
$f(x)\gt\frac{x^5}{5!}\dots$ is incorrect. Think of a sufficiently large $x$. This would imply that a polynomial function can be strictly greater than an exponential one. – Andrew Salmon Jun 9 '12 at 16:52
@DonAntonio Sure :) – Eelvex Jun 9 '12 at 16:53
# Tag Info
8
The answer by @NowIGetToLearnWhatAHeadIs is correct. It's worth learning the language used therein to help with your future studies. But as a primer, here's a simplified explanation. Start with your charge distribution and a "guess" for the direction of the electric field. As you can see, I made the guess have a component upward. We'll see shortly why ...
8
You have to realize that the system is invariant under rotations about the normal to the plane. Then then electric field must also be invariant under these rotations. An electric field component in the plane does change under such a rotation, so such a component must not exist if we have this invariance. Thus the electric field is purely along the normal to ...
6
As the very formulation of your question makes clear, we know what the actual algebra of local symmetries is. It is the five-dimensional diffeomorphism invariance assuming the $M^4\times S^1$ topology of the five-dimensional spacetime. The term "Kač-Moody generalization of an algebra" is nothing else than an alternative name for this algebra, especially for ...
3
Group actions in classical field theory. Let a classical theory of fields $\Phi:M\to V$ be given, where $M$ is a "base" manifold and $V$ is some target space, which, for the sake of simplicity and pedagogical clarity, we take to be a vector space. Let $\mathscr F$ denote the set of admissible field configurations, namely the set of admissible functions ...
2
All of those charges on the other side of the sphere "conspire" to exactly cancel the field of the nearby charges on the surface. Your analysis is correct. This result can also be shown by integration of Coulomb's Law, but it's not an easy calculation.
1
About the supposed paradox: $u$ and $\bar d$ have the same isospin quantum numbers, but not all the other properties. If you restrict your problem to only study the isospin space, you will not see that they have different charge and other different quantum numbers. About the charge: I don't know where your equation comes from, but it seems close to the ...
1
By applying Gauss' Law one gets (the surface integral over the sphere with $r>R$): $$\oint_s \vec{E}(r, \theta, \phi) \cdot \hat{n}(r, \theta, \phi) \, ds = \oint_s E(r, \theta, \phi)\, ds = \iint E(r, \theta, \phi)\, r^2 \sin\theta \, d\theta \, d\phi = 4\pi r^2 E$$ The surface integral depends only on $r$ and is equal to the area of the sphere. $E$ ...
1
You can say this by a spherical symmetry argument. All points with given $R$ on a sphere are equivalent. Now if $E$ depends on $\theta$, then it is different in different directions. So if someone says that $E$ is more at an angle of $30^\circ$, then your reply would be: "why not $45^\circ$ or $70^\circ$? What is so special about $30^\circ$?" And that is it. There is ...
1
Emmy Noether proved both the theorem and its converse. Look for the book "The Noether Theorems" for a precise and discussed formulation of her statements, as well as a translation of the original paper. It seems there is a link to the pdf in the princeton math website (I don't know about copyright issues, however).
1
Pick a point above the plane. From a point in the plane directly under the point above, draw a circle of some radius. Consider the contribution of the charge elements along the circle to the electric field at the point above the plane. Since the charge density is uniform, the horizontal components of the electric field from charge elements on opposite ...
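A small numerical illustration of that cancellation, as a sketch assuming NumPy: it sums the Coulomb contributions (constants set to 1) of point charges spaced evenly around one such circle and shows that only the component normal to the plane survives.

```python
import numpy as np

N, R, h = 360, 1.0, 0.5                      # charges per ring, ring radius, height of the field point
phi = np.linspace(0, 2 * np.pi, N, endpoint=False)
charges = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros(N)], axis=1)

point = np.array([0.0, 0.0, h])              # field point directly above the ring's center
r = point - charges
E = (r / np.linalg.norm(r, axis=1, keepdims=True) ** 3).sum(axis=0)

print(E)   # x and y components cancel to ~0; only the z (normal) component survives
```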
1
An answer connected to Gauss law (I hope everything is correct, since it's long ago for me ... so no warranty): An infinite plane of uniform charge for example in the z-plane has the charge distribution: $\rho=q\,\delta(z)$ Thus, the electrostatic potential should be $\Phi=\frac{q\,|z|}{2\pi}$. Hence, the electric vectorfield is: ...
1
The best way to make this intuitive is to draw the Coulomb force vector from any chosen point of the sphere to the point where you are measuring the field, then project this vector onto three chosen coordinates (it's better to choose spherical ones), then choose the opposite point (relative to the sphere's radius) and do the same projection, and you will see how, magically, all ...
# Time Levels
Summary: lays out a skill leveling system based on time. I don’t endorse it, but maybe you’ll have fun thinking about it.
Back in 201X[1], I was thinking about how to convey mastery of a skill.
For example, at some point I was talking to a young kid that also played the violin. The kid was playing through the Suzuki instruction program, and since he knew I also played the violin, they asked me what Suzuki book (1-10) I was working from at the moment. For context, I was in late high school/college, and had barely touched any of the Suzuki books (since I didn’t go through the program). And, the last book just contains Mozart’s Concerto 4, which is not difficult in the grand scheme of things[2].
# Methodology
My methodology was super non-rigorous, politically biased, and not terribly in-depth[8].
1. I drew from notes accumulated over the past year, which I made whenever I ran across a possible tax-related non-profit.
2. I searched Google for other sources I may not have stumbled across randomly, and were prominent enough to be found with search terms like “tax nonprofit”.
3. I searched Charity Navigator for charities related to taxes, skimming the list and cherry picking the ones I thought looked good.
Once I got a list of charities, I tried to answer some questions:
• Are they a 501(c)(3) or a 501(c)(4)? Remember, donations to one are tax deductible, donations to the other aren’t.
• What sort of work did they do? As we’ll see, there are some non-profits doing noble work, but it’s work that isn’t addressing the root causes.
• What do their financials look like? For example, if they’re sitting on lots of cash, it’s less pressing to donate to them.
• Do their goals align with mine?
Ideally, I would do a quasi-GiveWell-ian impact analysis, convert everything to something like QALY/$ but for something like "policy movement/$"[9], and figure out which charities were doing the best work and had funding shortfalls. However, I have neither the skill nor the time to do that, and mumbles something about perfect being the enemy of good.
# What I’m donating to
## Tax Policy Center (TPC)
• Both parent institutes are 501(c)(3), so donations are tax deductible.
• They mainly do modeling work, as well as produce educational materials. I like the high level suggestions in 10 ways to simplify the tax system because they acknowledge that there are trade offs to these simplifications. They also do bog-standard economic education, like how plastic bag taxes work, but an educational center can't live on more advanced concepts alone.
• The TPC is a… joint sub-institute?… of the Urban Institute and Brookings Institute. Both are well off: The Urban Institute has $101M in income (Charity Navigator entry), and the Brookings Institute has$108M in income (Wikipedia, 2016; weirdly, couldn’t find them on Charity Navigator). Unfortunately, it’s not clear how donations/assets are allocated to the TPC specifically.
• The materials produced by the TPC are not clearly partisan: their reporting on the TCJA was remarkably even handed. They do mention return-free tax filing (12), but it doesn’t appear to be a core pillar of their agenda. This isn’t such a big problem, because no one makes return-free filing a core part of their agenda. Additionally, from Wikipedia both parent institutes are regarded as not especially partisan.
So, the TPC seems to be doing even-handed policy analysis work, with the downside that it is hosted by institutions already well funded relative to other charities.
TPC’s donation page; you may need to specify that you are earmarking a donation for the TPC when donating to either parent institute.
## Institute on Taxation and Economic Policy (ITEP)
In short, ITEP is more focused specifically on tax policies, with a progressive bent I generally support, and fewer resources than TPC (especially CTJ[11]).
# Tax Help
## Tax Aid, Community Tax Aid
There’s a class of 501(c)(3) organizations focused on helping local low income folk fill out their tax returns. It’s a noble cause, but it’s not getting at the root of the problem, which is that they need to fill out returns at all.
# Wut-level Charities
These are charities that are confusing in some way, or have goals inimical to mine.
## Tax Analysts
• Tax Analysts are a 501(c)(3) organization.
• They produce tax analysis briefs, as their name implies.
• For a tax-focused charity, they have a ton of money: $68M in assets, and$48M in income (Charity Navigator).
• The briefs and positions within seem even handed, so my problem is not with the position of the charity[12]. No, it's with Tax Notes: it appears that the charity is somehow linked to a subscription service for tax briefs, which is provided to tax professionals and other parties interested enough to pony up thousands of dollars for analyses. For example, "Tax Notes Today" is $2,500 annually. I'm confused about whether Tax Notes is feeding into the Tax Analysts income above, which would make sense given their large asset pool. Working under that assumption, it seems like Tax Analysts don't need my money.

## Americans for Tax Reform (ATR)

This is one of the first results that show up if you search for "tax charity". However, the ATR's main (only?) goal is to lower taxes, period. Then, their Taxpayer Protection Pledge page is full of GOP pull quotes, making it clear who their demographic is, as if the banner "4,000,000 Americans (and counting) will receive Trump Tax Reform Bonuses" (complete with "Click here to see the employers paying bonuses!") wasn't clear enough[13]. I mean, I guess the maniacal focus on LOWER TAXES is refreshing in its clarity, but that's not what I want.

## Tax Foundation

The Tax Foundation is a 501(c)(3), and is correspondingly less on the nose about their target demographic than ATR. However, there are some clear indicators which way the Tax Foundation leans: the article "Tax Reform Isn't Done" talks about making provisions of the recent Tax Cut and Jobs Act permanent[14], and their donation page has a pull quote from Mike Pence.

## Tax Council Policy Institute (TCPI)

So the TCPI is a 501(c)(3) (Charity Navigator), but there's no donation page on their website. What? What charity doesn't want your money? Looking at their about page makes it clear that the TCPI is affiliated with The Tax Council, and on their home page is the quote "Our membership is comprised of (but not limited to) Fortune 500 companies, leading accounting and law firms, and major trade associations." Which makes it clear that they don't need your dinky public donation, because they have industrial support. Even if they did accept donations, a part of The Tax Council's mission is "… contributing to a better understanding of complex and evolving tax laws…" with nary a note about simplifying those tax laws, or at least simplifying how people do their taxes.

[1] What I should have done is look for other people trying to answer the same question, especially in an EA style. I did not do this, partly because honestly, I didn't really expect a strong showing, and partly because I had just finished doing my taxes and I didn't want to keep doing research. I would appreciate it if you let me know about stronger posts/guides on this topic.

[2] But is it ever impossible for me to not accept bad premises?

[3] This is a little disingenuous: I expect most people have simpler returns than I do. If I only had one W-2, I would have spent much less time on my taxes.

[4] That said, I would be shocked if the balance of evidence worked out that return-free filing was negative for American citizens.

[5] Not using the tax prep industry seems like an obvious first step, except I would be piling a lot of suffering on myself for little gain. I usually check my federal return numbers with Excel 1040, but this year is the first time I couldn't get my tax return within the same ballpark as the numbers given by the tax prep software. I could have sacrificed a weekend to figure out what was going on, but fuck that.
[6] This is your periodic reminder that action space is really wide, and doing the lazy thing is sometimes much less effective than doing any direct action. [7] “We never said that the effects would be bad“, seems to be the implied response to people charging it with partisan mongering. [8] I’ll probably spend more time writing and editing this post than I will have spent on actual research. [9] Yes, QALYs are weird, and the GiveWell approach is vulnerable to the streetlight effect. Understood. [10] The article also references Elizabeth Warren’s return-free bill, which does raise a question about why I don’t donate directly Elizabeth Warren. I remember her advocating for weird policies, but apparently my go-to dumb policy I thought she backed was her anti-vax position, which was either blown out of proportion or reversed at some point. So, basically no reason. [11] Unfortunately they don’t seem to have their IT locked down tight, since I found a almost certainly surreptitious CoinHive install on their site. [12] Even if the articles can be inanely focused on the minutia of policy: when I was doing my research, the Tax Analysts featured articles list was full of articles about the grain glitch, a tax loophole. [13] Plug for Sarah’s post about the intertwining of politics and aesthetics, Naming the Nameless, which partially explains why the ATR uses language usually reserved for last generation click bait and aggressive ads. [14] I haven’t really been following along with the TCJA, and don’t have a strong opinion on the specific policy changes, so it’s more of a gut-level identity-based dislike of the support of the TCJA. Yes, yes, this is why we can’t have nice things. # Review/Rant: The Southern Reach Trilogy Warnings: contains spoilers for AnnihilationAuthorityAcceptanceThe Quantum ThiefThe ExpanseThe LaundryverseDark Matter, and SCP (as much as SCP could be said to have spoilers). Discussion of horror works. Otherwise contains your regularly scheduled science fiction rant. I recently[1] blew through Jeff VanderMeer’s Annihilation/Authority/Acceptance series, also known as the Southern Reach Trilogy, which I’ll abbreviate to SRT. First things first: overall, it was pretty good. I enjoyed the writing, the clever turns of phrase (“Sheepish smile, offered up to a raging wolf of a narcissist.”[2]). It’s reasonably good at keeping up the tension, even while sitting around in bland offices with the characters politicking at each other. So the writing is alright, but the real draw was kind of the setting, kind of the story structure, kind of the subject matter. In a way, it’s right up my alley. It’s just a… weird alley. The most obvious weird is used as a driving force in the world building, forcing us reconsider what exactly we’re reading. Is this an environmental thriller? Kind of, but the environmental message is muted and bland, restricted to a repeated offhand remark “well, too bad the environment is fucked”. Is this an X-Files rip off? Kind of, but the paranormal is undeniable: you don’t want to believe it’s there, you want to believe there’s an explanation behind it all. Is this a romance? For the first book maybe, but with one of the pair entirely absent from the book[3]. The second book doesn’t help by introducing elements of the corporate thriller genre, and then axing any chance of finishing that transition by the end of the book. Whatever it is, all throughout SRT is world building, but shot through with twists and turns. 
It reminds me of those creepy dolly zooms (examples) which undermine the sense of perception, but applied to narrative. For example, the biologist and story at large constantly give up information that forces us to reconsider everything that came before: • By the way, my husband was part of the previous expedition. • By the way, there were way more expeditions than 12. • By the way, the danger lights don’t actually do anything[4]. • By the way, I (the biologist) am glowing. • By the way, the 12th expedition psychologist was the director of Southern Reach. • By the way, said director was in the lighthouse picture. • By the way, Central was involved in the Science and Seance Brigade. • Did I mention Control’s mom was in the thick of it? It’s sort of like Jeff is giving us an unreliable narrator with training wheels: we’re not left at any point with contradictory information, yet there’s a strong sense that our only line into the story is controlled by a grinning spin doctor. It’s an artful set of lies by omission. My suspicion is that I enjoyed this particular aspect of the SRT for the same reason I enjoyed The Quantum Thief trilogy. Hannu does a bit less hand holding[5], like starting the series with the infamous cold open “As always, before the warmind and I shoot each other, I try to make small talk. ‘Prisons are always the same, don’t you think?'”[6]. And as an example, the trilogy never explicitly lays out who the hell Fedorov is: in fact, I didn’t even expect him to be a real person, but his ideology (or the Sobornost’s understanding of his Great Common Task) was so constrained by the plot happening around it that I never had to leave the story and, say, search Wikipedia, which was excellent story crafting. Anathem is another book that does this sort of “fuck it we’ll do it live” sketching of a world to great effect[7]. But while The Quantum Thief is sprinting through cryptographic hierarchies and Sobornost copy clans, it’s still grounded in a human story. The master thief/warrior/detective tropes serve as a reassuring life vest while Hannu tries to drown us with future shock[8][9]. The SRT doesn’t need as much of a touch point, since we never leave Earth and bum around a mostly normal forest and a mostly normal office building[10], but the organizational breakdown in the expedition and Southern Reach agency are eminently relatable in the face of a much larger and stranger unfolding universe. Let’s unpack that unfolding universe. The world of SRT is weird: while The Quantum Thief is a fire hose, it only spews the literary equivalent of water, easily digestible and only murky in tremendous quantities. The SRT finishes with loose ends, the author at some point shrugging his shoulders and leaving a dripping plot point open for the spectacle of it, and that’s okay. It’s weird fiction. Another parallel: Solaris describes a truly alien world sized organism. What is it thinking? How does it think? How do you communicate with it? The story ends with all questions about the planet Solaris unresolved, with the humans only finding out that broadcasting EEG waves into the planet does something[11]. No men in rubber suits here, just an ineffable consciousness. Even a hungry planet makes more sense to us: at least it has visible goals that we can model (even if they are horrifying[12]). You end up with the same state in SRT: what is Area X doing? Why is it doing it? 
What the hell does the Markov chain sermon mean?[13] I’m guessing this is why people don’t like it: there are barely any answers at the end. How did turning into an animal and leaping through a doorway help at all? Did Central ever get their shit together? What’s up with the burning portal world? If you were expecting a knowable “rockets and chemicals” world, it’d be disorienting. In a way the story suffers a bit from a mystery box problem, where there are boxes that are never opened. However, in this case I think the unopened boxes are unimportant. Sure, the future of humanity is left uncertain, the mechanisms of Area X are still mysterious, but we know what happened to all the main characters, see how they played their parts and have some closure. (I am miffed that Joss Whedon is poisoning the proverbial storytelling well. Yes, mystery boxing makes economic sense, but now I see the mystery box like I hear the Wilheim scream, and it’s not pretty.) Okay, so we have a weird new world we explore, and weird fiction that is weird for the sake of being weird, but I’m neglecting the weird that gives people bad dreams. On one level there’s simple horror based on things going bump in the night: think about the moaning psychologist in the reeds, the slug/crawler able to kill those that interrupt its raving sermon. But that doesn’t show up in spades: the description of the 1st expedition disintegration cuts off after a sneak peak, omitting most of the ugly details. Jeff had plenty of opportunity to get into shock horror, and didn’t. I think that he wanted to instead emphasize the 2nd layer of Lovecraftian horror beyond the grasping tentacles, a horror driven by a tremendous and possibly/maybe/almost certainly malign world[14]. Area X pulls off simple impossible feats like time dilation and a barrier that transports things elsewhere (or nowhere). More concerning is the fact that Area X knows what humans look like. It’s an alien artifact, and somehow (something like the Integrated Information Theory of consciousness turns out to be right?) knows what makes up a human, recognizes them as special and in need of twisting, and can’t help but twist with powers beyond our understanding. There’s something large and unspeakably powerful stalking humanity, and it is hungry. Or maybe it’s not deliberately stalking humanity, and it’s just engaging sub-conscious level reactions, and everything it has done so far is the equivalent of rolling over in its sleep: how would Area X know it just rolled over a butterfly of an expedition? This implies a second question: what happens when it finally wakes up? It all reminds me of The Expanse series. Sure, there’s the radically simplified political/economic/military squabbling and made for action movie plot, but the protomolecule is what I’m thinking about. “It reaches out it reaches out it reaches out“: an entire asteroid of humans melted down for spare parts by the protomolecule are kept in abeyance for use, living and being killed again and again in simulation until the brute force search finds something useful happening (which in turns reminds me of the chilling line “There is life eternal in the Eater of Souls”.) Thousands die and live and die, all to check a cosmic answering machine. If we want to draw an analogy, the first level of horror draws from being powerless in the face of malign danger: think of the axe murderer chasing the cheerleader. The second level of horror draws from the entirety of humanity being powerless in the face of vast malign danger. Samuel L. 
Jackson can handle an axe murderer, but up against the AM from “I Have No Mouth and I Must Scream”? No contest[15]. (We could even go further, and think about the third level as malign forces of nature: Samuel L. Jackson vs the concept of existential despair might be an example, not on the level of “overcoming your inner demons” but “eradicating the concept as a plausible state of mind for humans to be in”[16]. Now that I think about it, it would have been an interesting direction to take The Quantum Thief’s All-Defector, fleshing it out as a distillation of a game theoretic concept like Moloch. Maybe there’s room for a story about recarving the world without certain malign mathematical patterns… well, maybe without religious overtones either.) But we’ve only been looking at what the rock of Area X has been doing to the humans. What about the hard place of the Southern Reach agency, and what they do to humans? The agency continually sends expeditions into a hostile world, getting little in return, and pulls stunts like herding rabbits into the boundary without rhyme or reason. In the face of failure to analyze, they can only keep sending people in, hoping that an answer to Area X will pop back out if they just figure out the right hyperparameter of “which people do we send?”. In other words: a questionably moral quasi-government agency, operating from the shadows to investigate and prepare to combat a unknown force that might destroy all of humanity? And as if it wasn’t close enough, the SRT throws in the line “What if containment is a joke?”, and I almost laughed out loud. It’s all a dead ringer for the Foundation in the SCP universe. A little background: SCP is one of those only-possible-with-the-internet media works[17], a collaborative wiki[18] detailing the workings of the Foundation, an extra-governmental agency with an international mandate to, well, secure, contain, and protect against a whole bevy of anomalous artifacts and entities. SCP. As is with wikis there is an enormous range of work: some case files detail tame artifacts (a living drawing), or problems solvable with non-nuclear heavy weapons (basically a big termite), or with nukes (a… living fatberg?), or something a 5-year old might come up with if you asked them to imagine the most scary possible thing (an invincible lizard! With acid blood!). And then there’s things a bit more disquieting. Light that converts biological matter to… something elseInfectious ideasAn object that can’t be described as it is, just as it is not (it’s definitely not safe).[19] Area X slots into this menagerie well, an upper tier threat to humanity. It’s utterly alien and unpredictable, actively wielding unknown amounts of power to unknown ends. With the end of SRT, it seems likely that an “XK Class End of the World scenario” is in progress, a real proper apocalypse pulling the curtains on humanity. On the other hand, the Southern Reach/Central agencies are vastly less competent at handling existential threats than the Foundation (this, despite a mastery of hypnosis the Foundation would kill for[20]). Part of it is the nonsensical strategy: for crying out loud, Central sends a mental weapon in to try and provoke Area X, and to what end? To hasten the end of the world? Then Lowry gaining control of the Area X project was absolutely atrocious organizational hygiene, a willful lack of consideration that contamination can go past biological bacteria and viruses, that the molecular assembly artifact under study can change your merely physical mind. 
An O5 Foundation overseer would have seen dormant memetic agents activate and rip through departments, and would take note of a field agent turned desk jockey that started accumulating more and more soft power in the branch investigating the same anomaly that nearly took his life… Back to the first hand, both works partly derive their horror from the collision of staid and sterile office politics with the viscerally supernatural. Drawing from the savanna approximation, we weren’t built to work in cubicles, and there were definitely no trolleys, much less trolley problems[21]. And office organizations are unnatural, but are the most effective way we’ve found to get a great many things done. So press the WEIRD but effective organizational tool into service to call the shots on constant high-velocity high-stakes moral problems, except it’s not people on the tracks but megadeaths, and you start to get at why it’s so unnerving to read interdepartmental memos about how to combat today’s supernatural horror[22]. And there’s the “sending people to their death” aspect of both organizations, which conflicts with their nominally scientific venture: at least no one pretends the military hierarchy is trying to discover some deeper truth when it sends people into battle. So the faceless bureaucracy expends[23] their people[24] to chart the ragged edges of reality[25], and gets dubious returns back. The Southern Reach gets a lighthouse full of unread journals, the Foundation usually just figures out yet another thing won’t destroy an artifact of interest. And as an honorable mention, the Laundryverse by Charlie Stross shares strong similarity to both works: Lovecraftian horrors are invokable with complicated math, the planets are slowly aligning, and world governments have created agencies to prepare for this eventuality, deal with “smaller” “supernatural” incidents, and find/house the nerds that accidentally discover “cosmic horror math”. This series focuses a bit more on the humorous side of office hijinks, and focuses on threats a bit more tractable to the human mind: at least many of the threats Bob faces can be hurt with the Medusa camera he carries around. If you want a taster into the Laundryverse, you could do worse than the freely available Tor stories (Down on the FarmOvertime[26], Equoid (gross!)[27], or the not-really-Laundryverse-but-pretty-damn-similar A Colder War[28], in which I remember Stross being inordinately pleased to include the line “so you’re saying we’ve got a, a Shoggoth gap?”. In the end, I wasn’t too entirely horrified: the best SCP has to offer rustled my jimmies more than Area X. And, the Laundryverse is somewhat more entertaining than the SRT. And Solaris does the “utterly alien”-alien a bit better. SRT, though, strikes a balance between all these concerns, and has much better writing quality than SCP, and fewer of the hangups that turned me off The Expanse[29]. But let me rant for a bit. On Goodreads Annihilation has an average 3.6 score. I personally don’t think it deserves such a low score, but a fair number of people were turned off by the characters, it’s not everyone’s cup of tea, okay sure fine. Dark Matter, a nominally science fiction novel, has a 4.1. 4.1! I only see acclaimed classics and amazing crowd favorites with those sorts of scores. The problem is that Dark Matter is FUCKING TERRIBLE. 
I know, I complained about this before (on my newsletter), and I’ll complain again, because it’s a fucking travesty that Annihilation got relegated to bargain bin scores compared with an utterly predictable story with trash science and characterization so bland doctors prescribe it when you are shitting your brains out due to a norovirus infection[30]. Maybe I can say it another way: Where lies the darkness that came from the hand of the writer I shall bring forth a fruit rotten with the tunnels of the worms that shine with the warmth of the flame of knowledge which consumes the hollow forms of a passing age and splits the fruit with a writhing of a monstrous absence which howl with worlds which never were and never will be. The forms will hack at the roots of the world and fell the tree of time which reveals the revelation of the fatal softness in the writer. All shall come to decide in the time of the revelation, and shall choose death[31] while the hand of the writer shall rejoice, for there is no sin in writing an action plot that the New York Times Bestseller list cannot forgive[32]. Again, a fucking travesty. Christ. [1] Not so recently by the time this post is published. I’m still a slow writer. [2] Okay, it’s a little too clever for its own good. [3] Surely there is Control/Grace rule 34. Or anyone/thousand-eye mutated Biologist. But as far as I know Biologist-husband is the only canon pairing. [4] I almost forgot these were a thing while reading Annihilation, so a quick refresher: “… a small rectangle of black metal with a glass-covered hole in the middle. If the hole glowed red, we had thirty minutes to remove ourselves to ‘a safe place.'”. [5] If you want a flavor of the info dump sort of style of The Quantum Thief, I recommend “Variations on an Apple” as an even more extreme example: I suspect that normal people feel the same way reading The Quantum Thief as when I first read that story. [6] Except where SRT slowly reveals the unnaturalness of the world, The Quantum Thief revels in it, fills the tub with weird and takes a luxurious bath. Like, it seems like Hannu tried really hard to get the “Toto, I don’t think we’re on Earth anymore” senses tingling right in the first sentence. [7] Well, if you’re willing to put up with/enjoy the made up words. [8] I mean, I do wonder if the author was too bad of a writer to pull off something less stereotypical while retaining the alien world, but maybe it was intentional. Sure, the writer has written some cringeworthy stuff (I never knew someone could string together the word “kawaii” so poorly), but that’s what the internet has given us, government officials with a publicly available teenager history. [9] Charlie Stross has more thoughts about drowning people with future shock as a genre, namely that it isn’t productive any longer because we’re already in a (future?) shocking world. [10] Breathing cafeteria wall notwithstanding. [11] Because EEG is somehow magical? Well, Solaris was written in the 1960s, so some amount of leeway is necessary. But even if you replace the EEG with some other brain state, you have to wonder what exactly Solaris would be doing with it… “Data can’t defend itself” and all that. [12] Another alternative is the cactus that doesn’t lift a finger to attain stated goals. [13] It turns out to be surprisingly understandable once you finish the trilogy, even if it reads like a digested Old Testament. [14] Yeah, we’re ignoring the icky parts of Lovecraft.
[15] I’m ignoring the fact that any movie plot would somehow have Samuel L. Motherfuckin’ Jackson end up the winner: it’s too bad that our widely known “tough guy” archetypes are all actors, which then implies the presence of Hollywood plot armor. [16] General memetic hazards might be another example: Roko’s Basilisk is a shitty example of one. [17] Other examples I know of are Football in the Year 17776 (previously), Deep Rising (a little less so, it’s just a comic+music), Homestuck (a little less so, it’s just walls of text+animations), and every piece of interactive fiction: for example, Take (and spoiler-ific analysis). [18] It seems almost like a fandom that didn’t coalesce around an existing body of work/author, one that just birthed into the void without a clear seeding work. [19] This isn’t the best that SCP has to offer. It’s just that there’s so damn much of it, and it’s not like I’m keeping records on which pages are the best. [20] A good life heuristic: if the Foundation would kill to get some capability, maybe you should rethink trying to get that capability. [22] The dispassionate Foundation reports are effective at conveying the sense of wrongness. There’s a brutal rhythm to the uniform format, leaving a feeling that in order to fight the monsters out there we had to suppress our humanity until we became monstrous in our own way. [23] Interesting yet morbid comment: “Well, you were properly expended, Gus. It was part of the price.”. [24] New head canon (if such a thing could be considered to exist in the SCP-verse): the replication crisis was suppressed by the Foundation to maintain the facade of the Milgram obedience experiment, which is useful for subconsciously convincing D-class they will eventually follow orders. [25] Line stolen from qntm’s Ra (chapter link). [26] The frame story is a bit eye roll inducing, but I understand a man’s gotta publish. [28] Home to my go-to chilling quotes “There is life eternal in the Eater of Souls” (previously referenced) and “Why is hell so cold this time of year?”. [29] Namely, the incredibly simplified politics and anti-corporation messages set up puppet villains that aren’t interesting: I’d be more into it if the trade offs were more nuanced. It’s still a good “Holden and friends fly around and have adventures” series, though. [30] The BRAT diet is bland for a reason: ask me how I know this! [31] No, not being emo here: the clones of the main character of Dark Matter (don’t make me look this up, please) end up choosing to fight each other because they can’t figure out functional decision theory. This would be fine, if the main character weren’t ostensibly eminent physics professor material. [32] Everything is based on some correspondence with what I actually mean, which fits with what Jeff VanderMeer also did with the original “strangling fruit” prose. # Making the Most of Bitcoin Epistemic status: I believe I’m drawing on common wisdom up to part 5. After that I’m just making shit up, but in a possibly interesting way. Not proper financial advice, see the end of the post. So let’s say you have some Bitcoin. What do you do with it? # #1. Cash out everything immediately Lots of people think putting your money in Bitcoin is a bad idea: Jack Bogle (founder of Vanguard), Warren Buffett, Robert Shiller (Yale economics professor), Mr. Money Mustache, Jason Calacanis (angel investor)[1]. I tend to agree with them[2], and am basically following this action by not buying in[3].
However, you (hypothetical Bitcoin holder) already knew that Bitcoin was widely thought to be not the greatest investment vehicle, and bought in anyways. You’re not going to immediately cash out, ok, fine, whatever. What else could you do? # #2. Become a HODLR You’re going to HODL the Bitcoin you have until it reaches THE MOON. It’s unclear what you’ll do once it reaches THE MOON. Maybe you’ll just slowly squander your satoshis on breeding Shiba Inus and kidnapping cryptography experts to ensure the sanctity of SHA-256. Or maybe one day you’ll end up with 99% of your net worth in Bitcoin, and the next day you’ll have 0% of your net worth in Bitcoin because your kidnapping orders were read incorrectly, and SHA-256 was demonstrably broken by vendetta-driven cryptographers overnight[4]. Also, the Iranians are really mad at you[5]. Another way of looking at it is that it’s difficult to make money slowly with Bitcoin: there are no fundamentals[6] to inexorably drive value, you can’t yell “gains through trade, buy ’em all and let the market sort ’em out!” and put your money in an index fund equivalent and then forget about it. The life of a HODLR is a life with a hell of a lot of volatility; maybe there’s a better strategy? # #3. Time the market The key is to buy low, sell high. This advice is approximately as useful as “be attractive, don’t be unattractive” labeled as dating advice. If you think you can beat the market, I’ll point you to all the rest of the brilliant ideas that have been tried and failed, and the anti-inductive nature of the market, and the seeming adequacy of liquid markets. If you still think you have a grand insight into market mechanics, the great thing is that you can go make a billion dollars if you’re right. Go on, and try to remember us little people. Besides, if I knew how to do this, would I be here telling you? I would be out playing with my Shiba herd instead. # #4. Recoup your investment This strategy has the virtue of simplicity: • Buy some Bitcoin. • Wait until the price of Bitcoin doubles. • Sell half your Bitcoin, making back your original “investment”. Now it’s not possible to be worse off than before. • … HODL? It’s nice to not lose money (as long as the market doesn’t crash out before you reach your doubled price), but you have one point at which you cash out, and then you’re back to not having any strategy. # #5. Rebalance Another strategy is to simply rebalance. A quick tutorial detour: let’s say there are only 2 investments in the world, boonds and stoocks[7]. Boonds are low risk, low reward, and stoocks are large risk, large reward. Let’s say you’re a young’un that has just entered the job market with $1000 to put into the market, and have an appetite for risk in order to get good returns. That means taking on higher risk, but that’s okay since you’ll have plenty of years to rebuild if things go south. So you might go for a 90% stoocks, 10% boonds allocation, for $900 stoocks/$100 boonds.
Now let’s say that the market absolutely tanks tomorrow. Boonds don’t really change since they’re low risk; let’s say boonds take a 10% hit. But stoocks, man, they took a 95% hit. Now we’ve ended up $45 stoocks/$90 boonds, meaning our asset allocation is 33.3% stoocks/66.6% boonds. #1. This is super sad, we’ve lost a lot of money, but #2. This isn’t what we want at all! We have so many boonds that our risk of losing most of what we have is low, but our returns are also going to be super low. Besides, even if we do lose it all, we’ll make it back in salary over a few days.
So what we can do is rebalance: we sell our abundance of boonds, and buy more stoocks, until we have a 90% stoock/10% boond allocation again, which works out to $121.5 stoocks/$13.5 boonds[8].
To fill out the rebalancing example, now let’s say you’re older and about to retire. Over the years you’ve shifted your asset allocation to 10% stoocks/90% boonds with $100000 stoocks/$900000 boonds: this close to retirement, you’d be in a lot of trouble if most of your money disappeared overnight, so you want low risk.
Now let’s say stoocks do fantastically well tomorrow, growing 10000%, so you end up with $10000000 stoocks/$900000 boonds. The problem is that now your allocation by percentage is 91.7% stoocks/8.3% boonds, and you’re about to enter retirement. All your wealth is in a super-risky investment! Could your heart even handle the bottom of the market dropping out? Instead of letting that happen, you could rebalance back to 10% stoocks/90% boonds or $1090000 stoocks/$9810000 boonds[9].
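(If it helps to see the arithmetic spelled out, here's a minimal Python sketch of the rebalancing step, run on the two toy examples above; the function is my own illustration, nothing load-bearing.)

```python
def rebalance(stoocks, boonds, target_stoock_frac):
    """Return (stoocks, boonds) dollar amounts after rebalancing the
    portfolio back to the target allocation."""
    total = stoocks + boonds
    return total * target_stoock_frac, total * (1 - target_stoock_frac)

# Young investor: $900/$100 crashes to $45/$90, rebalance back to 90/10.
print(rebalance(45, 90, 0.90))                # ≈ (121.5, 13.5)

# Near-retiree: stoocks boom to $10000000 against $900000 of boonds, back to 10/90.
print(rebalance(10_000_000, 900_000, 0.10))   # ≈ (1090000, 9810000)
```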
What’s the moral of the story? If you have multiple asset risk classes, then you don’t have to put it all on black and ride the bubbles up and down like a cowboy: rebalancing is a simple strategy to target some amount of risk, and then you can just go long and not worry about the fine details.
There are finer details that do matter: you can’t rebalance Bitcoin often or you might get eaten alive by mining fees[10] (which peaked at an average of $50 when Bitcoin was around $10000). So maybe you’d target some large-ish percentage change and only rebalance once Bitcoin changes by that amount.
Let’s run some numbers: let’s say 1 Bitcoin is currently $1000, and you have exactly 1 bitcoin, and you rebalance only whenever Bitcoin doubles in price (this basically extends the previous “double and sell” strategy). Now if Bitcoin goes from $1000 to $10000, you would rebalance 3 times: when Bitcoin is $2000, $4000, and $8000. If you have many more assets than $1000, you can hand wave away the exact percentage calculations and just sell half the Bitcoin at each point. Even if Bitcoin crashes to $0.001 after reaching $10000, you’ve “made” $3000 that you’ve rebalanced to other stabler assets (minus fees, ~$70). Not bad for riding a speculative bubble! # #6. Kind of rebalance-ish On the other hand, only getting $3000 out of a maximum of $10000 Bitcoin seems… not a good show. Sure, you were going to get only $0.001 if you were a HODLR, but that $10000 is a juicy number, and $3000 is an awful lot smaller.
Or consider the scenario in which you read Gwern in 2011 speculating that Bitcoin could reach $10000, and you were convinced that you should be long on Bitcoin. However, it was still possible that Bitcoin wouldn’t reach $10000, falling prey to some unforeseen problem before then. You would want to hedge, but rebalancing would throw away most of your gains before you got close to $10000. For example, if you started with $1000 @ $1/BTC for a total of 1000 BTC, and you rebalanced at every doubling, you would end up with $13000 cash and ~$1000 in BTC, compared to HODLing ending up with $10000000 in BTC. It’s a used car versus being the Pineapple Fund guy, I get it, it’s why HODLing is enticing.
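(The “sell a fraction at every doubling” arithmetic from the last two sections fits in a few lines; this sketch assumes the price marches straight up to a peak and then crashes, and ignores fees.)

```python
def doubling_sales(btc, start_price, peak_price, sell_frac=0.5):
    """Cash banked by selling `sell_frac` of the remaining BTC at each doubling."""
    cash, price = 0.0, start_price
    while price * 2 <= peak_price:
        price *= 2
        cash += btc * sell_frac * price
        btc *= 1 - sell_frac
    return cash, btc

print(doubling_sales(1, 1_000, 10_000))    # -> (3000.0, 0.125): the $3000 example above
print(doubling_sales(1_000, 1, 10_000))    # -> (13000.0, ~0.12 BTC): the Gwern-era example
```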
The problem is that rebalancing doesn’t know anything about beliefs about long term outcomes, just about overall asset class volatility.
That said, if it’s possible to encode your beliefs as a probability distribution[11], you could run (appropriately named) Monte Carlo simulations of different selling strategies and see how they do, choosing a strategy that does well given what you expect the price of BTC to do.
I’ll work some simple examples, following some assumptions:
• we start from a current price of $10000/BTC. • we don’t care about the day-to-day price: if BTC reaches $20000, dips back to $15000, and then rises to $50000, we aren’t concerning ourselves with trying to time the dip, just with the notion that BTC went from $20000 to $50000.
• rebalancing is replaced with a hedge operation, where some fixed fraction of our remaining Bitcoin stake is sold each time the price rises by a fixed proportion. We’ll fix our sell point at every doubling (except for a sensitivity analysis step below).
• the transfer fees are set to be proportional to the price of BTC, at 0.5%: in practice, this just serves as a drag on the BTC-cash conversion. If you’re dealing with amounts much larger than 1 BTC (or SegWit works out), you might be able to amortize the transfer costs down to 0. To allow interpolating between both cases, we’ll simply give both 0.5% and 0% transaction drag simulations.
• the price of Bitcoin is modeled as rising to some maximum amount, and then crashing to basically nothing. This can also cover cases where BTC crashes and stays low for such a long time that it would have been better to put your assets elsewhere.
The processes of adapting the general principle to real life, consulting the economic/finance literature for vastly superior modeling methods, using more sophisticated selling strategies than selling a constant fraction, and not betting your shirt on black are left as an exercise for the reader.
So let’s say our beliefs are described by a mangled normal distribution[12]: we’re certain BTC will reach the starting price (obviously, we’re already there), about 68% less certain that BTC will reach 1 standard deviation above the starting price, about 95% less certain that it will reach 2 standard deviations above, and so on and so forth. We’re not interested in a max BTC price below our starting price, so we’re just chopping the distribution in half and doubling the positive side.
Since we’ve centered the normal distribution on our starting price, we have only one other parameter to choose, our standard deviation (stdev). Some values are obviously bad: choosing a stdev of $1 means you are astronomically confident that BTC will never go above $10100. While you might not believe in the fundamentals behind Bitcoin, it is odd to be so confident that the crash is going to happen in such a specific range of prices. On the other hand, I don’t have a formal inference engine from which I can get a stdev value that best fits my beliefs, so I’ll be generous and choose a middling value of $10000. So if we run a number of simulations where the price of BTC follows the described normal distribution, we get: Several things become apparent right away: • there’s an obvious stepping effect happening[13]. Thinking about it, it’s obvious that each separate line is describing the effects of selling at each doubling. The lowest line only manages to sell once, the next line sells twice, and so on. • as one might expect, selling everything is low variance, and holding more is higher variance. As a reference point, the 0.5 sell fraction is just the previously described rebalancing strategy. • even when hitting 4 sell points, the transaction drag on 1 BTC isn’t too bad. • fitting a trend line with LOESS gets us a rough[14] measure of expected profit. In particular, we seem to top out at $20000 around a 0.5 sell fraction.
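(For the curious, here's a stripped-down sketch of the kind of simulation I'm describing, not the exact code behind the plots: it assumes half-normal beliefs about the peak price, the sell-at-every-doubling rule, 0.5% drag, and a crash to roughly nothing afterwards.)

```python
import numpy as np

rng = np.random.default_rng(0)
START, STDEV, FEE = 10_000, 10_000, 0.005      # starting price, belief stdev, drag

def run_once(sell_frac, fee=FEE):
    peak = START + abs(rng.normal(0, STDEV))   # half-normal belief about the peak price
    btc, cash, price = 1.0, 0.0, START
    while price * 2 <= peak:                   # sell at every doubling below the peak
        price *= 2
        sold = btc * sell_frac
        cash += sold * price * (1 - fee)
        btc -= sold
    return cash                                # leftover BTC is ~worthless after the crash

for frac in (0.25, 0.5, 0.75, 1.0):
    payoffs = [run_once(frac) for _ in range(10_000)]
    print(frac, round(float(np.mean(payoffs))))
```

The stepping effect falls straight out of this structure: each simulated peak only allows an integer number of doublings, so the payoffs cluster into bands.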
An obvious sensitivity analysis comes to mind: does the fact we’re selling only at every doubling matter? What if we sold more often? We can re-run the analysis when we sell at every 1.2x:
The stepping effect is still there, but less obvious: we hit more steps on the way to the crash price. The largest data points don’t go as high, but you can also see fewer zero values, since we pick up some selling points between $10000 and $20000. Additionally, the LOESS peaks at a lower sell fraction, which makes some sense: since we’re hitting more sell points, we can afford to hold on to more.
What if the normal distribution doesn’t describe our beliefs? Say we want more emphasis on the long term. Then our beliefs might be better modeled with the exponential distribution which is known to have a thicker tail than the normal.
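(In the sketch above, swapping in these beliefs is a one-line change, assuming the scale parameterization of the exponential:)

```python
peak = START + rng.exponential(10_000)   # thicker-tailed belief about the peak price
```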
If we use $10000 for the exponential distribution’s scale parameter (the mean, 1/λ), then our simulations look like: The behavior isn’t too different, with the exception that some simulations start surviving to the 5th sell point. Additionally, the LOESS curves move to the left a bit compared to the normal, but only by a little: from eyeballing it, the peak might move from a sell fraction of 0.55 to 0.45. Again, there are more sophisticated analyses; for example, maybe you think that your probability distribution peaks around $100k/BTC and falls off to either side, in which case you would want a more complicated strategy to take advantage of your more complicated beliefs.
However, there’s a theoretical problem with our analyses thus far. The distributions we’ve been using are unbounded, allowing BTC prices that can theoretically go to infinity. Sure, we can treat economics as effectively unbounded: there sure are a lot of stars out there, and no economic activity has even left Earth orbit (Starman, some bacteria, and drawings of naked people notwithstanding). But that’s in the long run[15], and we only really care about BTC in the short term, when it’s generating “returns” in excess of normal market returns. For example, if BTC is wildly successful and becomes the world currency, it becomes hard to see how BTC can continue to grow in value far beyond the economic growth of the rest of the world[16]. So we might assume that once BTC eats the world, BTC just follows the bog standard economic growth of the world, and ceases to be interesting relative to all other assets[17].
However, this does mean we can add two assumptions: our distributions should be bounded, and there’s a chance the value of our held BTC doesn’t all disappear in the end. I’ll bound our distributions at the current stock market cap (as of 2018/03/06, $80 trillion, rounded to $100 trillion for ease of math)[18], and use a 2nd function (not a probability distribution!) to encode the probability that if BTC reaches a certain price, it will crash.
For the probability of reaching a price, I’ll keep using the exponential distribution, but bounded and re-normalized to add up to 1 within the bounds[19]. For the probability that BTC will crash, we don’t need a distribution: we could imagine a function that always returns 100% for a crash (as we were assuming before), or 0, or any value between. The mathematically important point is that we’re not beholden to normalization concerns. I essentially free-handed this function piecemeal with polynomials, with the goal of reflecting a belief that either BTC stabilizes as a small player in the financial markets, or becomes the world currency and is unlikely to lose value suddenly. Plotted on log axes:
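(Continuing the earlier sketch, this variant looks roughly like the code below; the crash_probability curve is an arbitrary toy stand-in for my free-handed polynomial, so treat its shape as illustrative only.)

```python
SCALE, CAP = 10_000, 10_000_000                  # exponential scale; cap ~1000x the start

def sample_peak():
    while True:                                  # rejection-sample the bounded exponential
        peak = START + rng.exponential(SCALE)
        if peak <= CAP:
            return peak

def crash_probability(peak):
    # Toy stand-in: near-certain crash at modest peaks, unlikely if BTC "eats the world".
    return float(np.clip(1.2 - peak / CAP, 0.0, 1.0))

def run_once_bounded(sell_frac, fee=FEE):
    peak, btc, cash, price = sample_peak(), 1.0, 0.0, START
    while price * 2 <= peak:
        price *= 2
        cash += btc * sell_frac * price * (1 - fee)
        btc *= 1 - sell_frac
    if rng.random() >= crash_probability(peak):  # survived: held BTC keeps its peak value
        cash += btc * peak
    return cash
```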
When we run simulations (displayed on a log y-axis):
Up to now transaction drag hasn’t mattered much, but here it suddenly shows up as a big deal: if we end up in a world where the price of BTC goes long and retains value, 0.5% drag appears to suddenly be super important, preventing us from getting close to the maximum $10000000 from our initial 1 BTC. It’s not too surprising, since more mundane investments also need to deal with fee[20] and tax drag. But if these beliefs are correct, do we do better on average? Not really, especially with transaction drag factored in. This holds true even when we zoom in on a linear axis[21][22]: I’ll end here. You could always make your models more complicated, but I’m making precisely $0 off this, and that XCOM 2 isn’t going to play itself.
So after all this analysis, what do I recommend you do?
Trick question! I don’t recommend you do anything, because this post is not financial advice. If you persist in trying to take financial advice from someone who may frankly be a corgi, the world will laugh at you when BTC crashes to the floor and Dogecoin rises to take its place as the true master of cryptocurrencies. ALL HAIL THE SHIBA, WOOF WOOF.
[1] “But all those people are famous and invested in the status quo!” Okay, you got me, will linking to a non-super-rich acquaintance’s opinion on Bitcoin help?
To be even fairer, I could also come up with a similar list supporting Bitcoin instead, but I’m less interested in debating the merits of Bitcoin, and more interested in what you do once you wake up with a hangover and a wallet full of satoshis.
[2] I disagree with Scott when he says that we should have won bigger with Bitcoin. Most of the gnashing of teeth over Bitcoin is pure hindsight bias.
[3] Currently the only reason I would get any cryptocurrency is to use it as a distributed timestamping service.
[4] It’s not just breaking the base crypto layer: the nations of the world could decide to get real and criminalize Bitcoin. Law enforcement could get better at deanonymizing transactions, causing all the criminals to leave for something like Monero. Price stabilization just never happens, and people get sick of waiting for it to happen. Transaction fees spike whenever people actually try to use Bitcoin as a currency, or the Lightning Network turns out to have deep technical problems after a mighty effort to put it into place (deep problems in a widely deployed technology? That could never happen!). Ethereum gets its shit together and eats Bitcoin’s lunch with digital kittens. There’s the first mtgox-level hack since BTC started trading on actual exchanges. People decide they want to cash out of the tulip market en masse (although that might be unfair to the tulips).
[5] It’s unclear where you would get a Shah today, but exhuming all past Shahs is probably enough to piss people off.
[6] No, evading taxes/police actions is not a fundamental.
[7] Names munged to emphasize that they’re fantasy financial instruments.
[8] There’s something to be said about keeping a stable and liquid store like a savings account to make sure living expenses are covered for 6 months. You can replace the implied “all assets” with “all available assets” for a more non-toy policy.
[9] If the market simply dropped back to its previous position before you could rebalance, then you aren’t any worse off than you were 2 days ago, so maybe it wouldn’t be so disappointing to miss this opportunity. But that’s just anchoring, and Homo Economicus in your position would be super bummed.
[10] Normal investments have similar tax implications where you realize gains/losses at sale, covered by the general term tax drag.
[11] More on probabilities as states of beliefs, instead of simply reflecting experimental frequencies.
[12] Coming up with a better distribution is left as an exercise for the reader.
[13] A mild amount of jittering was added to make this visible with more simulation points.
[14] LOESS fits with squared loss, which emphasizes outliers, which you might not want. Additionally, LOESS is an ad hoc computational method (much like k-means) which won’t necessarily maximize anything; the main advantage is that it looks pretty if you choose the right spans to average over, and you don’t have to come up with a parametric model to fit to.
[15] And as they say, in the long run we’re all dead. Yes, we’re working on that.
[16] Sure, the bubble could continue, but bubbles pop at some point, and if it’s so damn important to the economy war isn’t out of the question, and if large scale nuclear war happens, more than just the price of Bitcoin is going to crash. “Here lies humanity: they committed suicide by hard math.”
Or a different perspective. Who would win?
• Billions of people that didn’t buy into Bitcoin, all frozen out of the brave new economy, backed by all the military might of nations that care about the sovereignty of their money supply.
• One chainy boi.
[17] There’s reasons to believe BTC might act otherwise:
• The fact that Bitcoin is deflationary, so it probably won’t act like a normal commodity in the limit if it eats the world. Even companies can issue more stock, or more gold can be found.
• The marginal Bitcoin might be way over priced forever.
[18] Interestingly, this implies that BTC only has around 1000x of hyper-growth headroom.
[19] The distribution chart is not properly normalized, since the distribution is actually linear without the log axis, but it simulates correctly.
[20] The movement to index funds seems partly rooted in avoiding high mutual fund fees.
[21] I’m not entirely sure what that hump in the ideal price is doing: it shows up in the other LOESS curves, and persists with changes in the random seed.
[22] We end up with a different maximum hump with the log and linear graphs: what’s going on here? Keep in mind that LOESS operates on minimizing squared error, and minimizing squared log error is a bit different than minimizing squared error.
Nothing that’s been said before, but it didn’t click until I thought about it some more and had an AHA! moment, so I’m doing my own write up.
Let’s say that you’re faced with a Newcomb problem[1].
The basic gist is this: Omega shows up, an entity that you know can predict your actions almost perfectly. Concretely, out of the last million times it has played out this scenario, it has been right 99.99% of the time[2]. Omega presents you with two boxes, of which box A contains $1000000 or nothing, and box B always contains $1000. You have only two choices, take just box A (one boxing) or take both box A and B (two boxing). The twist is that if Omega predicted you would two box, then A is empty, but if it predicted you would one box, then box A contains the $1000000. Causal decision theory (CDT) is a leading brand of decision theories that says you should two box[3]: once Omega presents you with the boxes, Omega has already made up its mind. In that case, there’s no direct causal relationship between your choice and the boxes having money in them, so box A already has $1000000 or nothing in it. So, it’s always better to two box since you always end up with $1000 more than you would otherwise. People that follow CDT to two boxing claim that one boxing is irrational, and that Omega is specifically rewarding irrational people. To me it seems clear CDT was never meant to handle problems that include minds modeling minds: is it also irrational to show up in Grand Central station at noon in Schelling’s coordination problem, despite the lack of causal connection between your actions and the actions of your anonymous compatriot? So you might agree that CDT just doesn’t do well in this case[4] and decide to throw CDT out the window for this particular problem, netting ourselves an expected $999900 from one boxing[5], instead of the expected $1100 payout from two boxing. But let’s throw in a further twist: let’s say the boxes are transparent, and you can see how much money is inside, and you see $1000000 inside box A, in addition to the $1000 inside box B. Now do you two box? I previously thought “duh, of course”: you SEE the two boxes, both with money in them. Why wouldn’t you take both? A friend I respect told me that I was being crazy, but didn’t have time to explain, and I went away confused. Why would you still one box with an extra $1000 sitting in front of you?
(Feel free to think about the problem before continuing.)
The problem was that I was thinking too small: I was thinking about the worlds in which I had both boxes with money in them, but I wasn’t thinking about how often those worlds would happen. If Omega wants to maintain a 99.99% accuracy rate, it can’t just give anyone a box with $1000000. It has to be choosy, to look for people that will likely one box even when severely tempted. That is, if you two box in clear-box situations and you get presented with a clear box with $1000000 in it, congratulations, you’ve won the lottery. However, people like you simply aren’t chosen often (at a 0.01% rate), so in the transparent Newcomb world it is better to be the sort of person that will one box, even when tempted with arguably free money.
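(For the skeptical, the expected values quoted above come straight out of the setup; this is just the arithmetic, written out:)

```python
acc = 0.9999                                                # Omega's track record

ev_one_box = acc * 1_000_000 + (1 - acc) * 0                # -> $999900
ev_two_box = acc * 1_000 + (1 - acc) * (1_000_000 + 1_000)  # -> $1100

# In the transparent version, a committed two-boxer only ever *sees* a full box A
# in the misprediction cases, i.e. about 0.01% of the time.
print(ev_one_box, ev_two_box, 1 - acc)
```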
The clear-box formulation makes it even clearer how Newcomb’s problem relates to ethics.
“I’m looking for someone that will likely one box when given ample opportunity to two box, and literally be willing to leave money on the table.”
Now, let’s replace some words:
“I’m looking for a <study partner> that will likely <contribute to our understanding of the class material> when given ample opportunity to <coast on our efforts>.”
“I’m looking for <a startup co-founder> that will likely <help build a great business> when given ample opportunity to <exploit the business for personal gain>.”
“I’m looking for <a romantic partner> that will likely <be supportive> when given ample opportunity to <make asymmetric relationship demands>.”
In some ways these derived problems are wildly different: these (lowercase) omegas don’t choose correctly as often as 99.99% of the time, there’s an iterated aspect, both parties are playing simultaneously, and there’s reputation involved[6]. But the important decision theory core carries over, and moreover it generalizes past “be nice” into alien domains that include boxes with $1000000 in them, and still correctly decides to get the $1000000.
[1] I agree for most intents and purposes that the Parfit’s Hitchhiker formulation of the problem is strictly better because it lacks problems that commonly trip people up in Newcomb’s problem, like needing a weird Omega. However, then you get the clear-box problem right away, and I’m going for more incremental counter-intuitive-ness right now.
[2] Traditional Newcomb problem formulations started with a perfect predictor, but it becomes a major point that people get tripped up over because it’s so damn “unrealistic”. I’m sure no one would object to Omega never losing tic-tac-toe, but no one seems to want to accept a hypothetical entity that can run TEMPEST attacks on human brains and do inference really well. Whatever, it’s ultimately not important to the problem, so it’s somewhat better to place realistic bounds on Omega.
[3] Notably, Evidential Decision Theory says you should one box, but fails on other problems, and makes it a point to avoid getting news (which isn’t the worst policy when applied to most common news sources, but this applies to all information inflow).
[4] I haven’t really grokked it, but friends are excited about functional decision theory, which works around some of the problems with CDT and EDT.
[5] It’s not exactly $1000000, since Omega isn’t omniscient and only has 99.99% accuracy, so we have to take the average of the outcomes weighted by their probability to get the overall expected outcome ($1000000 * 0.9999 + $0 * 0.0001 = $999900).
[6] Notably, it starts to bear some resemblance to the iterated prisoner’s dilemma.
# Tape is HOW expensive?
Maybe you've seen that hard drive prices aren't falling so quickly. Maybe you've seen the articles making claims like "tape offers $0.0089/GB!"[1], looked at recent hard drive prices, and seriously thought about finally fulfilling the old backup adage "have at least 3 backups, at least one of which is offsite" with some nice old-school tape[2]. So you'd open up a browser to start researching, and then close it right afterwards in horror: tape drive prices have HOW many digits? 4? The prices aren't even just edging over $1000, they're usually solidly into the $2000s, or higher. Maybe then you start thinking about just forking all your money to The Cloud™ to keep your data. But maybe it's worth taking a look and seeing exactly how the numbers work out. As an extreme example, if you can buy a $2000 device that gives you infinite storage, then that is a really interesting proposition[3]. Of course, the media costs for tape aren't zero, but they are cheaper than the equivalent capacity in hard drives. Focusing in, the question becomes: when does the lower cost of each additional unit of tape storage overcome the fixed costs of tape, such that tape systems become competitive with hard drives?
Some background: tape formats are defined by the Linear Tape-Open Consortium (LTO)[4], which periodically defines bigger and better interoperable tape formats, helpfully labeled as LTO-N. Each jump in level roughly corresponds to a doubling of capacity, such that LTO-3 contains 400GB/tape while the recent LTO-8 contains 12TB/tape.
And some points of clarification:
• LTO tapes usually have two capacity numbers; for example, LTO-3 tapes usually advertise themselves as being able to contain 400 or 800GB. If you're lucky, the advertising material will suffix "(compressed)" sotto voce, notifying you that the 800GB number is inflated by some LTO blessed pie-in-the-sky compression factor. Ignore this, just look at the LTO level numbers and their uncompressed capacity.
• We usually talk about hard drives as a single unit (if you can see the individual hard drive platters, that means you are having a bad problem and you will not be storing data on that drive today), but tape is more closely related to the floppy/CD drives of yore, where media is freely exchangeable between drives.
First, I gathered some hard numbers on cost. I trawled Newegg and Amazon for drives and media for each LTO level from 3 to 8, grabbing prices for the first 3 drives from each source and 5 media from each. Sometimes this wasn't possible, like for LTO-8: it's recent, and I could only find 2 different drives. I restricted myself to a handful of pricing examples because I didn't want to gather data endlessly (there are a lot of people selling LTO tapes), but I didn't want to have to sift through a startling lack of data about whether unusually low/high prices were legitimate offers, or indications something was wrong with the seller/device. Whatever, I just got enough data to average it out[5].
Second, I took the average media cost for an LTO level, and how much uncompressed data that level could store, and figured the cost per TB. It's true that some of the later LTO levels should look a lot more discretized: for example, storing 5 and 10 TB on a LTO-8 tape (which can store 12TB) will cost exactly the same, while you'll need to get around twice as many LTO-3 tapes. However, just making everything linear makes analysis a lot easier, and will give approximately correct answers. If it turns out that tape becomes competitive at some small media storage multiple then we can re-run the numbers.
Then, it's just a matter of solving a couple of linear equations, one representing the tape fixed and variable costs, and the other the hard drive costs. To capture some variability in the hard drive cost, I compared the tapes against both a hypothetical cheap $100/4TB drive and a $140/4TB drive[6].
$Cost_{Tape} = TapeMedia/TB \cdot Storage + TapeDrive$
$Cost_{HD} = HD/TB \cdot Storage$
Finding the storage point where the costs become equal to each other:
$Storage_{competitive} = \frac{TapeDrive}{HD/TB - TapeMedia/TB}$
When we solve with some actual data (Google Sheets), we get the smallest competitive capacity going to LTO-5 (1.5TB/tape). And yet, it doesn't look good: if we're comparing against expensive hard drives, we need to be storing ~100TB to become competitive, and if we're comparing against cheap hard drives, we need ~190TB to break even.
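(Here's the break-even arithmetic in code form; the drive and media prices below are illustrative placeholders in the right ballpark, not the averages from my spreadsheet.)

```python
def breakeven_tb(tape_drive_cost, tape_media_per_tb, hd_per_tb):
    """Storage (TB) at which a tape drive plus media costs the same as plain hard drives."""
    return tape_drive_cost / (hd_per_tb - tape_media_per_tb)

lto5_drive, lto5_media_per_tb = 2000, 15     # hypothetical LTO-5-ish numbers
for label, hd_per_tb in [("expensive HD ($140/4TB)", 140 / 4),
                         ("cheap HD ($100/4TB)", 100 / 4)]:
    print(label, round(breakeven_tb(lto5_drive, lto5_media_per_tb, hd_per_tb)), "TB")
# -> roughly 100 TB against expensive drives and 200 TB against cheap ones,
#    the same ballpark as the spreadsheet's ~100TB/~190TB figures.
```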
So I did some more sensitivity analysis: right now, drives and media are expensive for the recent LTO-7 and 8 standards. Will our conclusions change when LTO-7/8 equipment drops to current LTO-5 prices? Comparing to expensive drives, the minimum competitive capacity drops to ~65TB, but that's assuming no further HD R&D, and is still way above the amount of data I will want to store in the near future[7].
In retrospect, it should have been more obvious than I was thinking that the huge fixed costs of tape drives along with non-minuscule variable costs just doesn't make sense for any data installation that doesn't handle Web Scale™ data.
And that's not even fully considering all the weird hurdles tape has:
• It's unclear whether there are RAID-like tape appropriate filesystems/data structures, especially when you don't have N drives that you can write to at the same time. You can read stories about wrestling with tape RAID, but it doesn't seem to be a feature of the standard Linear Tape File System.
• Tied into with the previous point, you'll need to swap tapes once one of them fills up. Or if you're trying to get media redundancy, you'll need to do a media swapping dance every time you want to backup. Needing to manage backup media isn't really great when you're trying to make backups so easy they're fire-and-forget.
• Tape drives are super expensive, which makes them a giant single point of failure. Having redundant drives means you need even more tons of data to stay competitive with normal hard drives.
So we've arrived at the same conclusion as our gut: tapes are overdetermined to be a bad idea for the common consumer. If you can get really cheap clearance/fire sale drives, it might become worth it, but keep in mind the other concerns listed above.
[1] Which initially doesn't sound very impressive, given Backblaze's B2 offers $0.005/GB. However, that's an ongoing monthly cost: two months is enough to put tape back into the game, at least according to the linked Forbes article. (I've also remembered more impressive numbers in other articles, but maybe that's just my memory playing tricks on me.) [2] Tape has nice properties beyond just having a lower incremental storage cost. It's offline as opposed to constantly online: once you have access to a hard drive, you can quickly overwrite any part of it. Since it isn't possible to reach tapes that aren't physically in the drive, it becomes much more difficult to destroy all your data (say, in a ransomware attack). Tapes are possibly more stable in terms of shelf life, and you can theoretically write to them faster than hard drives. [3] If nothing else, owning as many universe breaking/munchkin approved pieces of technology seems like a good policy. [4] Sure, you can use VCRs for storage with ArVid, but it is not competitive at all at 2GB on 2 hour tapes. It could probably be made to work better since it uses only 2 luminance levels instead of a full 256+ gradations, but the graininess of home videos doesn't give me hope for much better resolution. Plus, you can do all that extra work, but you'll only end up with capacity comparable to current Blu-Rays. And, where are you going to find a bunch of VCR tapes these days? [5] Taking the median is probably better for outlier rejection, and taking the minimum price in each category would probably be a good sensitivity analysis step. I don't believe either choice drastically changes the output for me, since I have relatively small amounts of data to store, but you might want to run the numbers yourself if you have more than, say, 20TB to store. [6] It's true that there will likely be some additional hardware costs to actually access more than 12 hard drives, but if nothing else you could go the storage pod route and get 60 drives to a single computer, so we'll just handwave away the extra costs. [7] Honestly, I'm not even breaking 1TB at the moment. # 2017 Review If there’s a theme for my 2017, it seems to be FAILURE. # FAILURE at cultivating habits • Due to the addition of a morning standup at work, I noticed I was getting in much later than I thought. I could previously pass off some pretty egregious arrival times as “a one time thing” to myself, but not when a hard deadline made it clear that this was happening multiple times a week. So I tried harder to get in earlier, and this made basically no impact. • I noticed I was spending a lot of time watching video game streaming; Twitch streams are long, regularly 4 hours, which would just vaporize an evening or an afternoon. It’s not so much that it was a ton of total time, but it was basically a monolithic chunk of time that wasn’t amenable to being split up to allow something to get done each day. I love you beaglerush, but the streams are just too damn long, so I decided I should stop watching game streams. However, I just felt tired and amenable to bending the rules at approximately the same rate as before, so my behavior didn’t really change. • I’m a night owl, to the extent that going to sleep between midnight and 3 AM probably covers 95% of my sleeping times, and the rest is heavily skewed towards after 3 AM. So I started tracking when I went to sleep, and had some friends apply social demerits when I went to sleep late.
I got mildly better, but was still all over the place with my sleep schedule. There’s a happy-ish ending for these habits, but first… # FAILURE at meeting goals A year ago, I decided to have some resolutions. However, I didn’t want them to be year-long resolutions: a year is a long fucking time, and I knew I’d be pretty susceptible to… • falling off the wagon and then not getting back on, burning the rest of the year, or • mis-estimating how big a year-sized task would be, which would probably only become apparent near the middle of the year. If I got it really wrong, it would be months before I could properly try again. So similarly to my newsletter tiers, I decided to break the year into fifths (quinters?), and resolved to do something for each of those. I hoped it would be long enough to actually get something done, while being short enough that I could iterate quickly. So, how did I do? ## Quinter 1 • FAILURE. Finish the hardware part of project noisEE. Design turned out to be hard, did a design Hail Mary that required parts that didn’t get here before the end of the quinter. • Stretch FAILURE. Read all of Jaynes’ Probability Theory. Got only ~40% of the way through: it turns out trying to replicate all the proofs in a textbook is pretty hard. • FAILURE. Try to do more city exploratory activities. Planning and executing fun/interesting activities was more time consuming than anticipated, and I didn’t account for how much homebody inertia I harbored and how time consuming the other goals would be. • SUCCESS. Keep up all pre-existing habits. ## Quinter 2 • FAILURE. Finish project noisEE. It turns out the Hail Mary design was broken, who could have guessed? • SUCCESS (mostly). Make a NAS (network attached storage) box. Technically, the wrap up happened the day after the quinter ended. • SUCCESS. Keep up all pre-existing habits. Apparently attaining this goal isn’t a problem, so I stopped keeping track of this in future quinters. ## Quinter 3 • SUCCESS/FAILURE. Finish project noisEE, or know what went wrong while finishing. There was a problem with the 2nd Hail Mary, which I debugged and figured out, but it was expensive to fix, so I didn’t stretch to actually fix it. However, the next quinter I didn’t respect the timebox, which was the entire point of this timebox[1]. • FAILURE. Make a feedback widget for meetups. After designing it, I discovered I didn’t want to spend the money to fabricate the feasible “worse is better” solution. • SUCCESS. Spend 20 hours on learning how to Live Forever. Spent 30+ hours on research. ## Quinter 4 It’s about this time that I start enforcing goal ordering: instead of doing the easiest/most fun thing first, I would try to finish goals in order, so large and time consuming tasks don’t get pushed to the end of the quinter. • SUCCESS. Finish ingesting in-progress Live Forever research. Just wanted to make sure momentum continued from the previous quinter so I would actually finish covering all the main points I discovered I wanted to include. • SUCCESS (sad). Fix project noisEE, or give up after 4 hours. I gave up after 4 hours, after trying out some hacks. • SUCCESS. Write up noisEE project notes. Surprisingly, I had things to say despite not actually finishing the project, making the notes into a mistakes post. • FAILURE. Write up feedback widget design for others to follow. For some reason, I ignored my reluctance to actually build the thing and assumed I would value writing out potentially building the thing instead. Talk about a total loss of motivation. 
• SUCCESS. Write up the Live Forever research results, post about them. Includes practicing presenting the results a number of times. • Stretch FAILURE. Prep the meta-analysis checklist. Didn’t have time or the necessary knowledge. ## Quinter 5 At this point, I’m starting to feel stretched out, so I started building in break times into my goal structure. • SUCCESS. Prepare to present the Live Forever research. Was probably too conservative here, I also planned to actually present, and there weren’t foreseeable things that would have prevented it from happening. • FAILURE. Take a week off project/goal work. I thought I would have only 1 week to prepare to present, but it turned into 2-3 weeks and broke up this break week, which was not nearly as satisfying. • SUCCESS. Redesign the U2F Zero to be more hobbyist friendly[2]. • SUCCESS. Do regular Cloud™ backups[3][4]. • SUCCESS. Take 1 week off at the end of the year. That’s when I’m writing this post! # Miscellaneous FAILURE There’s so much FAILURE, I need a miscellaneous category. Speaking of categories, I was organizing a category theory reading group for the first third of 2017 based on Bartosz’s lectures, but eventually the abstractions on abstractions got to be too much[5] and everything else in life piled on, and we ended up doing only sporadic meetups where we sat around confused before I decided to kill the project. In the end, we FAILED to reach functional programming enlightenment. I’ve even started to FAIL at digesting lactose. It’s super sad, because I love cheese. Why was there so much FAILURE this year? Part of it is that I had more things to FAIL at. For example, I wouldn’t previously keep track of how I was doing at my habits, and color code them so I could just look at my tracker and say “huh, there’s more red than usual”. Or, I wouldn’t previously have the data to say “huh, I went to sleep after 3AM 2 weeks in a row”[6]. And in a way, I eventually succeeded: for each of the habits I listed earlier, I applied the club of Beeminder and hit myself until I started Doing The Thing. Does my reliance on an extrinsic tool like Beeminder constitute a moral failing? Maybe, but the end results are exactly what I wanted: • I got super motivated to build up a safety buffer to get into work early (even before getting my sleep schedule together!), • only broke Twitch abstinence twice since starting in May[7], • immediately went from an average sleeping time of 2AM to almost exactly 12:29[8]. And for goals, I opened myself up to FAILURE by actually making fine-grained goals, which meant estimating what I could do, and tracking whether I actually did them. In a way, there are two ways to FAIL: I could overestimate my abilities, or I could simply make mistakes and FAIL to finish what I otherwise would have been able to do. In practice, it seems like I tended to FAIL by overestimating myself. It’s pretty obvious in retrospect: I started out by FAILING at everything, and then started cutting down my expectations and biting off smaller and smaller chunks until I actually hit my goals. Maybe I should have built up instead of cutting down, but I wanted to feel badass, and apparently the only way you can do that is by jumping in the deep end, so FAILING over and over it is. On the other hand, I think I just got lucky that I stuck it out until I got it together and started hitting my targets, so if you can do it by building upwards, that might work better. # Takeaways So going forward what are the things I’d keep in mind when trying to hit goals? 
• Think through more details when planning. Saying “I will do all the proofs in Probability Theory” is fine and good, but there’s only so much time, and if you haven’t worked even one of the proofs, then it’s not a goal, it’s a hope and a prayer. Get some Fermi estimates in there, think about how long things will take and what could go wrong (looking at you, hardware turn-around times[9]). • If you’ve never done a similar thing before, then estimating the effort to hit a certain goal is going to be wildly uncertain. Pare the goal way down, because there are probably failure modes you’re not even aware of. For example, “lose 5 pounds” would be a good goal for me, because I’ve fiddled with the relevant knobs before and have an idea about what works. “Make a coat from scratch” is a black box to me, hence not a good goal. Instead, I might instead aim for “find all the tough parts of making a coat from scratch”, which is more specific, more amenable to different approaches, and doesn’t set up the expectation of some end product that is actually usable[10]. • Relatedly, 10 weeks (about the length of a quinter) is not a leisurely long time. Things need to be small enough to actually fit, preferably small enough to do in a sprint near the end of the quinter. I know crunch time is a bad habit carried over from my academic years, but old habits die hard, and at least the things get done. • Build in some rest. I pulled some ludicrous hours in the beginning of the year, and noticed as time went on that I seemed less able to put in a solid 16 hours of math-working on the weekends. My current best guess is that I haven’t been taking off enough time from trying to Do The Thing, so I’m building in some break times. • Don’t throw away time. You’ll notice that I kept the noisEE dream alive for 4 quinters, each time trying tweaks and hacks to make it work. It’s clear now that this is a classic example of the sunk cost fallacy, and that I either should have spent more time at the beginning doing it right, or just letting it go at that point. Another way to throw away time is to try and do things you don’t want to do. My example is trying to make/post the feedback widget, which is pretty simple, but I discovered I couldn’t give any shits about it after the design phase. This isn’t great, because I said I wanted to do the thing, and not doing the thing means you’re breaking the habit of doing the things you’ve set out to do (from Superhuman by Habit). Unfortunately, I’m still not sure how to distinguish when you really want to do something versus when an easily overridden part of yourself thinks it’s virtuous to want to do something, which is much less motivating. • Goal hacks might be useful. Looking at it, the main hack I used was timeboxes, which worked sometimes (total longevity research was within a 2x order of magnitude of my timebox estimate) and not so well in others (noisEE overflowed). It seems to be most useful when I’m uncertain how much actual work needs to be done to achieve some goal, but I still want to make sure work happens on it. After working on it for some number of hours, it should be clearer how sizable the task is and it can get a more concrete milestone in the next round. Stretch goals might also work, but making things stretch goals seems like a symptom of unwanted uncertainty, and tend to be sized such that they never actually get hit. Unless I find myself stagnating, I plan on just dropping stretch goals as a tool. 
• If you’re not doing the thing because of something low-level like procrastination, a bigger stick to beat yourself with might help. Beeminder is my stick of choice, with the caveat that you need to be able to be honest with yourself, and excessive failure might just make you sad, instead of productive. (As a counterpoint, you might be interested in non-coercive ways to motivate yourself, in which case you might check out Malcolm Ocean’s blog.) Despite all the FAILURE, I think I agree with the sentiment of Ray’s post: over the past few years, I’ve started getting my shit together, building the ability to do things that are more complicated than a single-weekend project and the agency to pursue them. That said, most of the things I finished this year are somewhat ancillary, laying the groundwork for future projects and figuring out what systems work for me. Now that I’ve finished a year testing those systems and have some experience using them, maybe next year I can go faster, better, stronger. Not harder, though, that’s how you burn out. Well, here’s to 2018: maybe the stage I set this year will have a proper play in the next. [1] Thinking about it, timeboxes fall into two uses. You either want to make a daunting task more tractable, so you commit to only doing a small timebox, and if you want to keep going then that’s great! However, the other timebox is used to make sure that some task that would otherwise grow without bound stays bounded. I intended for the noisEE timebox to be used in the bound fashion, so when I kept deciding to keep working on it, that meant the timebox was broken. [2] This project does not have a post yet, and may never have one. Hold your horses. [3] Offsite backups are an important part of your digital hygiene, and the Butt is the perfect place to put them. [4] If people really want it, I can post about my backup set up. [5] Don’t worry, it’s easy, an arrow is like a functor is like an abstract transformation! [6] Knowledge is power, France is bacon. [7] One of these wasn’t Twitch at all, but a gaming stream I accidentally stumbled across on YouTube, but that still counts. [8] HMM I WONDER WHEN I SET MY SLEEP DEADLINE. [9] Unless you’re willing to pay out the nose, getting boards on a slow boat from China takes a while. [10] The tradeoff is that the 2nd goal is more nebulous: how do you know that you’ve found all the tough parts of making a coat? Maybe timeboxes would help in this case. # Ain’t No Calvary Coming Epistemic status[1]: preaching, basically. An apology, in both senses[2]. I know my mom reads my blog; hi, mom. Mothers being mothers, I figure I owe her a sit-down answer to why I’m not Christian, and don’t expect to re-become Christian[3]. Now, I don’t expect to convince anyone, but maybe you, dear reader, will simply better understand. Let’s start at the end. Let’s start with the agony of hell, and the bliss of heaven. Sure, humans don’t understand infinities, don’t grasp the eye-watering vastness of forever nor the weight of a maximally good/bad time. Nevertheless, young me had an active imagination, so getting people out of the hell column and into the heaven column was obviously the most important thing, which made it surprising that my unbeliever friends were so unconcerned with the whole deal. I supposed that they already had a motivated answer in place: as heathens, they would be wallowing in unrepentant hedonism, and would go to great lengths to make sure they kept seeing a world free of a demanding and righteous God.
I knew the usual way to evangelize, but it depended to a frustrating degree on the person being evangelized to. It seemed unacceptable that some of my friends might go to hell just because their hearts were never in the right place. Well, what if I found a truly universal argument for my truly universal religion? The Lord surely wouldn’t begrudge guidance in my quest to find the unmistakable fingerprints of God (which were everywhere, so the exercise should be a cakewalk), and I would craft a marvelous set of arguments to save everyone. Early on, I realized that the arguments I found persuasive wouldn’t be persuasive to the people I wanted to reach: if you assumed the Bible was a historical text you would end up saying “no way, Jesus did all these miracles, that’s amazing!”, but what if you didn’t trust the Bible? I would need to step outside of the assumption that God existed, and then see the way back. Was this dangerous to my faith? Well, I would never really leave: I would just be empathetic and step into my friend’s shoes, to better know how to guide them into the light. And you remember the story about walking with Jesus on the beach? There was no way this could go wrong! Looking back, I see that my thoughts were self-serving. As a product of both faith and science, I wanted to make it clear that religion could meet science on its own terms and win. If the hierarchy of authority didn’t subordinate science to religion, then…? So I studied apologetics[4], particularly Genesis apologetics. I made myself familiar with the things like young vs. old earth creationism, the tornado-in-a-junk-yard equivocation, attacks I could make on gradual and punctuated equilibrium[5]. I was even dazzled by canopy theory, where a high-altitude aerial ocean wrapped the planet, providing waters for The Flood and allowing really long lifespans by blocking harmful solar radiation[6]. I went on missions, raising money and overcoming my natural reticence to talk to people about the Good Word. I even listened almost solely to Christian rock music. Now, I don’t doubt I believed: I felt the divine in retreats and mission trips, me and my brothers and sisters in Christ singing as one[7]. I prayed for guidance, hung on the words of holy scripture, found the words for leading a group prayer, and eventually confirmed my faith. As part of my confirmation, I remember being baptized for the 2nd time in high school[8]: a clear, lazy river had cut a gorge into sandstone, and the sunset lit the gorge with a warm glow. Moments before I went under the water, I thought “of course. How could I doubt with such beauty in front of me?”. But some of these experiences also sowed the seeds of doubt. Someone asked if I wanted the blessing of tongues: I said yes, thinking a divine gift of speaking more than halting Spanish would be great for my upcoming mission trip. And, how cool would it be to have a real world miracle happen right in front of me‽ Later I tried to figure out if glossolalia was in fact the tongue of angels[9], but I didn’t come up with anything certain, which was worrying. Why were my local leaders enthusiastic about this “gift of tongues”[10], but other religious authorities were against the practice? 
On a mission trip I told someone I could stay on missions indefinitely (in classic high school fashion, I had read the word “indefinite” a few times and thought it sounded cool) and was brought up short when they responded with skepticism that someone could stay forever; why wouldn’t they stay if the work was righteous, comfortable living be damned? Or I would think about going to seminary instead of college, and wonder if that was God’s plan for me. How did I know what was right, what was true? The thing is that I didn’t even begin to know. On my quest for answers, I didn’t comprehend the sheer magnitude of 2000 years of religious commentary[11]. I didn’t grasp how hairy the family tree of Christian sects was, each with their own tweaks on salvation. I read Mere Christianity and a few books on apologetics, and thought it would be enough. I didn’t even understand my enemy at all, refusing to grapple with something so basically wrong as The Selfish Gene. Into this void on my map of knowledge I sailed a theological Columbus, expecting dragons where there was a whole continent of thought. So the more I learned, the more doubt compounded. When my church split, I wondered why such a thing could happen: were some of the people simply wrong about a theological question? That raised more disturbing questions about how one could choose the truest sect of Protestant Christianity, ignoring “cults” like Mormonism or Catholicism or Eastern Orthodox or even other religions entirely, like Islam (and there are non-Abrahamic religions, too‽). Or, maybe a church split could happen for purely practical concerns, but it was disturbing that such an important event in a theological institution wasn’t grounded in theological conflict: if not a church split, then what should be determined by theology?[12] And, I realized other religions had followers with similarly intense experiences: what set mine apart from theirs? Again, what did I know, and how did I know it? Don’t worry, my spiritual leaders would say. God(ot) is coming, just wait here by this tree and he’ll be along any moment now[13]. And maybe God would come, but he would maintain plausible deniability, an undercover agent in his own church. Faith healings wouldn’t do something so visible as give back an arm, just chase away the back pain of a youth leader for a while. My church yelled prayers over a girl with a genetic defect, and the only outcome was frightening her[14]. Demonic possession leading to supernatural acts isn’t a recorded phenomenon, despite the proliferation of cameras everywhere. So the whispers of godhood would always scurry behind the veil of faith whenever a light of inquiry shone on it. I started refusing to stand during praise. Singing with this pit of questions in my stomach seemed too much like betrayal, displaying to the world smiles and melodies I knew were empty. I sat and thought instead, trying to retrace Kant’s Critique of Pure Reason without Kant’s talent[15]. I simply couldn’t accept the dearth of convincing evidence and simply trust, when all my instincts and training screamed for a sure foundation, when I knew a cosmic math teacher would circle my answer of “yes, God exists!” and scribble in red “please show your work“. I told myself I would end it in a blaze of glory, pledging fealty to a worthy Lord, or flinging obscenities at the sky and pulpit when they didn’t have the answers. 
Instead, my search for god outside of god himself petered out under a pile of unanswered questions[16], and I languished in a purgatory of uncertainty. In a way, I was mourning the death of god. It took years, but now I confidently say I’m an atheist.

So that was the past. What about the future? Sometimes the prodigal son falls on hard times and has to come home; in the case of the church, home has a number of benefits. Peace of mind that everything will turn out okay. A sabbath, if one decides to keep it. A set of meditation-like practices at regular intervals (even in Christianity!). A set of high-trust social circles[17] with capped vitriol (in theory; in practice, see the Protestant Reformation and aforementioned church splits), a supportive community with a professional leader, a time to all feel together. Higher levels of conscientiousness. Higher productivity[18]. The ability to attract additional votes in Congressional races. Chips at the table of Pascal’s Wager[19].

Perhaps most importantly, though, is a sense of hope. How does one have hope for the future when there is only annihilation at the end? Paul saw the end, a world descending into decadence, a world that couldn’t save itself: hell, given a map, it wouldn’t save itself. Contrary to this apocalyptic vision, scientism[20]/liberalism preaches abundance, the continual development of an ever better world. We took the limits of man and sundered them; we walked on the moon, we eradicated polio, we tricked rocks into thinking for us, and we’ll break more limits before we’re done. Paul was the product of an endless cycle of empires; we’re on a trajectory to leave the solar system[21]. There is light in the world[22], and it is us.

But if the world is simply getting better, then does it matter what I believe? Well, our rise is only part of the story: it took tremendous work to get from where we were to where we are, and the current world is built on the blood of our mistakes[23]. The double-edged sword of technology could easily lop off our hand if we’re not careful. We’ve done some terrible things already, and finding the Great Leap Forward-scale mistakes with our face is hideously expensive. So progress is possible, but we haven’t won.

How do the engineers say it? “Hope is not a strategy.” There ain’t no Calvary coming[24], ain’t no Good King to save us, ain’t no cosmic liquidation of the global consciousness, ain’t no millennium expiration date on suffering. A reductionist scientific world is a cold world without guardrails, with nothing to stop us from destroying ourselves[25]: if we want a happy ending, we’ll need to breach Heaven ourselves, and bowing our heads and closing our eyes in prayer won’t help when we should be watching the road ahead. It’s going to be a lot of hard work, but this isn’t a cause for despair. This is a call to arms.

So in the past, a successful prodigal son may have gone home for a sense of continuity and purpose, a sense of hope beyond the grave. However, now he doesn’t have to. It’s not just about unrepentant hedonism[26]: we’re getting closer to audacious goals like ending poverty, ending aging, ending death. We won’t wait for a bright new afterlife that isn’t coming: we humanists will do our best, and maybe, just maybe, it will be enough. No heaven above, no hell below, just us. Let us begin.

[1] Epistemics: the ability to know things. Epistemic status: how confident I am about the thing I am writing about.

[2] Senses: saying sorry, and in the sense of apologetics or defending a position.
Commonly found as the bi-gram “Christian apologetics”.

[3] I almost didn’t publish this post, figuring I hadn’t heard from my mom about faith-related topics in a while. Then my mom told all my relatives “We are praying for a godly young woman who can bring <thenoviceoof> back to us”, so here we are.

[4] A defense of the faith, basically, usually hanging around as a bi-gram like “Christian apologetics”. See Wikipedia.

[5] Standing from where I am, I can see how the books would paint the strengths of science as weakness: “look at how science has been wrong! And then it changed its mind, like a shifty con-man!” In this respect, the flip-flopping nature of science journalism in fields like nutrition is Not Helping, a way of poisoning the well of confident proclamations of evidence, such that everyone defaults to throwing up their hands in the face of evidence, instead of actually assessing it.

[6] In retrospect, I had a thing for weirdly implausible theories: I remember being smitten with the idea that all of physics could be explained by continually expanding subatomic particles, a sort of classical Theory of Everything that no one asked for, with at least one gaping hole you could drive trucks through (hint: how do satellites work?).

[7] We even cautioned ourselves against “spiritual highs”. We would feel something, but the something wouldn’t always be there, which maybe should have tipped me off about something fishy happening. How do they say it, “don’t get high off your own supply”?

[8] Many children are baptized soon after birth, and confirmed at some later age when they can actually make decisions. Hmm.

[9] Now, I know that I could tell by listening for European capitals.

[10] I didn’t actually get to the point of spewing glossolalia: I could hear my youth group leader’s disappointment that I didn’t quite let myself go while repeating “Jesus, I love you” faster than I could speak. And, finding out that no earthly audience would have understood what I was saying was also a shock, like finding out God solely communicated to people through grilled cheeses.

[11] Talk about being bad at grasping infinities: I couldn’t even grasp 2000 years. “More things than are dreamt of in your philosophy”, etc.

[12] The obvious rejoinder is that the church is still an earthly institution, and it’s still subject to mundane concerns like balancing the budget: for every Protestant Reformation grounded in theological conflict, there’s another hundred grounded in conflicts over the size of the choir, all because we live in a fallen world. The general counter-principle is that if there’s no way to tell from the behavior of churches whether we’re in a godly or godless world, then the fact there exists a church ceases to count as evidence.

[13] The fact that some biblical scholars translate “cross” as “tree” makes me suspicious that Waiting for Godot was in fact making this exact reference.

[14] I didn’t partake; this was after I started being weirded out by the charismatics.

[15] I’m disappointed I didn’t throw up my hands at some point and yell “I Kant do it!”

[16] Sure, there were answers, but they weren’t satisfying. You couldn’t get there from here.

[17] Of course, the trust comes at a price; I wouldn’t want to be trans in a small tight-knit fundamentalist town.

[18] It’s not clear from the abstract of the paper, but in Age of Em Robin Hanson cites this paper as showing the religious have higher productivity.
[19] Mostly not serious, since I would expect a jealous Abrahamic God to throw out any spiritual bookies. Also keep in mind that Pascal’s wager falls apart even with the simple addition of multiple gods competing for faith.

[20] I am totally aware that scientism is normally derogatory. However, science itself doesn’t require the modes of thought that we normally attribute to our current scientific culture.

[21] One might worry that we would simply export our age-old conflicts and flaws to the stars, in which case they might become… bear with me… the Sins of a Solar Empire?

[22] “Run for the mountains!” said Apostle Paul. “It is the dawn of the morning Son!” Then Oppenheimer said “someone said they were looking for a dawn?”

[23] Sapiens notes “Haber won the Nobel prize in chemistry. Not Peace.”

[24] I’m sorry-not-sorry about the pun. If you don’t get it, Calvary is the hill Jesus supposedly died on, and “ain’t no cavalry coming” is a military saying: there’s no backup riding in to save the day.

[25] Nukes are traditional, if less concerning these days. Pandemics are flirting on the edge of global consciousness, AI getting more serious, and meta-things like throwing away our values and producing a “Disneyland without children” are becoming more concerning.

[26] Just look at what the effective altruists are doing with their 10%.

# The Mundane Science of Living Forever

Epistemic Status: timeboxed research, treat as a stepping stone to more comprehensive beliefs. Known uncertainty called out.

Live forever, or die trying! TLDR?

# Yes, Immortality

I wrestled with whether to shoot for a more normal and mundane title, like “In Pursuit of Longevity”, but “live a long time!” just doesn’t have the ring that “live forever!” does.

Clarification: I don’t have the Fountain of Youth. I’m relying on the future to do the heavy lifting. Kurzweil’s escape velocity idea is the key: we want to live long enough that life expectancy starts increasing more than 1 year per year. Life expectancy is currently stagnant, so we want to live as long as possible to maximize our chances of hitting some sort of transition. In other words, we need silver bullets to overcome the Gompertz curve, but there are no silver bullets yet, just boring old lead bullets. We’ll have to make our own silver bullet factory, and use the lead bullets to get there.

So, the bulk of this post will be devoted to simply living healthily. A lot of the advice is boring and standard: eat your vegetables, exercise, get enough sleep. However, I wanted to check out the science and see what holds up under (admittedly amateur) scrutiny. (I’ll be ignoring the painfully obvious things, like not smoking. If you’re smoking, stop smoking[1].)

My process: I timeboxed myself to 20 hours of research, ending in August 2017. First, I looked up the common causes of death and free-form generated possible interventions. Then, I followed the citations in the Lifestyle interventions to increase longevity post and then searched Google Scholar, especially for meta-analyses, and read the studies, evaluating them in a non-rigorous way. I discarded interventions that I wasn’t certain about: for example, Sarah lists some promising drugs and gene therapies but based only on animal studies, where I wanted more certainty. I ended up using 30+ hours, so not everything is exhaustively researched as much as I would like: for example, there was a fair amount of abstract skimming. I did not read every paper I reference end-to-end.
On the other hand, many papers were also locked behind paywalls so I couldn’t do much more than that. This means if you read one of these results and implement it without talking to your doctor about it and bad things happen to you, I will ask you: ARE YOU A SPRING LAMB? WHY THE FUCK ARE YOU DOING THINGS A RANDOM PERSON ON THE INTERNET TOLD YOU TO DO? AND WITHOUT VETTING THOSE THINGS?

Or more concretely: you are a unique butterfly, and no one cares except the medical world. What happens for the faceless statistical masses might not happen for you. I will not cover every single possible interaction and caveat, because that is what those huge medical diagnosis books are for, and I don’t have the knowledge to tell you about the contents of those books. Don’t hurt yourself, ask your doctor.

# An example: blood donation

First, I wanted to lead with an example of how the wrong methods can cripple a conclusion and end up with bad results. Now, blood donation looks like it is very, very good for male health outcomes. From “Blood donation and blood donor mortality after adjustment for a healthy donor effect.” with 1,182,495 participants (N=1,182,495) published in 2015 (note it’s just an abstract, but the abstract has the data we want):

» For each additional annual blood donation, the all-cause mortality RR (relative risk) is 0.925, with a 95% CI (confidence interval) from 0.906 to 0.943[2].

I’ll be summarizing this information as RR = 0.925[0.906, 0.943] throughout the post. (Unless otherwise stated, in this post an RR measure will refer to all-cause mortality, and X[Y, Z] CI reports will be values followed by 95% confidence intervals. There will also be references to OR (odds ratio) and HR (hazard ratio).)

There’s even a well fleshed out mechanism, where iron ends up oxidizing parts of the cardiovascular system and damaging it, and hence doing regular blood donation removes excess blood iron. But there are some possible confounders:

• blood donation carries some of the most stringent health screens most people face, which results in a healthy donor effect,

• altruism could be correlated with conscientiousness, which might affect health outcomes.

The study cited earlier is observational: they’re looking at existing data gathered in the course of normal donation and studying it to see if there’s an effect. In order to make a blanket recommendation that men should donate blood at some regular interval, what we really want is to isolate the effect of donation by putting people through the normal intake and screening process, and then right before putting the needle in randomize for actually taking the donation or not, or even stick the needle in and not actually draw blood. (Note that randomization is not strictly better than observational studies: observations can provide insights that randomization would miss[3], and a rigorous RCT might not match real world implementations. Nevertheless, most of the time I want a randomized trial.)

No one had done an RCT (randomized controlled trial) in this fashion, and I expect any such study to have a really hard time passing an ethics board when I get numerous calls to help alleviate emergency blood need throughout the year. However, Quebec noticed that their screening procedures were too strict: a large group of people were being rejected when they were in fact perfectly healthy. The rejection trigger didn’t appear to otherwise correlate with health, so this was about as good a randomized experiment as we were going to get.
Their results were reported in “Iron and cardiac ischemia: a natural, quasi-random experiment comparing eligible with disqualified blood donors” (2013, N=63,246):

» Donors vs nondonors, RR = 1.02[0.92, 1.13]

In other words, there was basically no correlation. In fact, in another section of the paper the authors could get the correlation to come back by slicing their data in a way that better matched the healthy donor process. The usual hallmarks of science laypeople can pick apart aren’t there: the N is large, there’s a large cross-section of the community (no elderly Hispanic women effect) and there’s no way to even fudge our interpretation of the numbers: we’re not beholden to science’s fetish with p=0.05, so failing the 95% CI could be okay if it were definitely leaning in the right direction. But it’s almost exactly in the middle. The effect isn’t there or is so tiny that it’s not worth considering.

So that’s an example of how things can look like great interventions, and then turn out to have basically no effect. With our skeptic hats firmly in place, let’s dive into the rest!

# Easy, Effective

## Vitamin D

Vitamin D gets the stamp of approval from both Cochrane and Gwern[4]. Lots of big randomized studies have been done with vitamin D supplementation, so the effect size is pretty pinned down. From “Vitamin D supplementation for prevention of mortality in adults” (2012, N=95,286, Cochrane):

» Supplementation with vitamin D vs none, RR = 0.94[0.91, 0.98]

Another meta-analysis used by Gwern, “Vitamin D with calcium reduces mortality: patient level pooled analysis of 70,528 patients from eight major vitamin D trials” (2012, N=70,528):

» Supplementation with vitamin D vs none, HR = 0.93[0.88, 0.99]

You might think that one side of the CI is pretty bad, since RR = 0.98 means the intervention is almost the same as the control. On the other hand, (1) wait until you read the rest of the post (2) keep in mind that it’s very cheap to supplement vitamin D. Your local drugstore probably has a year’s worth for $20. In a pinch, more sunlight also works, but if you have darker skin, sunlight is less effective.
If you’re interested, there’s lots of hypothesizing on the mechanisms by which more vitamin D impacts things like cardiovascular health (overview).
(If you want a striking visual example of vitamin D precursors correlating with cancer, there’s a noticeable geographic gradient in certain cancer deaths; “An estimate of premature cancer mortality in the U.S. due to inadequate doses of solar ultraviolet-B radiation” (2002) states that some cancers are twice as prevalent in the northern US as in the southern. There’s more sun in the south, and sunlight helps synthesize vitamin D. Coincidence?! If you want to, you can see this effect yourself by going to the Cancer Mortality Maps viewer from the National Cancer Institute and taking a look at the bladder, breast, corpus uteri or rectum cancers.)
# Difficult, but Effective
## Exercise
Exercising is hard work, but it pays off big.
From “Domains of physical activity and all-cause mortality: systematic review and dose–response meta-analysis of cohort studies” (2011, N=unknown subset of 1,338,143[5]):
» Comparing people that get 300 minutes of moderate-vigorous exercise/week vs sedentary populations, RR = 0.74[0.65, 0.85]
Unfortunately, “moderate-vigorous” is pretty vague, and the number of multiple comparisons being made is breathtaking.
MET-h is a unit of energy expenditure roughly equivalent to sitting and doing nothing for an hour. Converting different exercises (or intensities of exercise) to MET-h measures can allow directly comparing/aggregating different exercise data. This also makes it easier to decide exactly what “moderate-vigorous” exercise is, roughly mapping to less than 3 METs for light, 3-6 for moderate, and above 6 for vigorous.
With this in mind, we can get a regression seeing how additional MET-hs impact RR. From the previous study (2011, N=unknown subset of 844,026):
» +4 MET-h/day, RR = 0.90[0.87, 0.92] (roughly mapping to 1h of moderate exercise)
» +7 MET-h/day, RR = 0.83[0.79, 0.87] (roughly mapping to 1h vigorous exercise)
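To make the MET-h bookkeeping concrete, here’s a small sketch (my own illustration, not from the study; the MET values are rough compendium-style assumptions) that turns a made-up weekly routine into MET-h/day for comparison against the regression numbers above:

```python
# Rough sketch: convert a weekly routine into MET-h/day.
# The MET values below are approximate, compendium-style assumptions.
routine = [
    # (activity, METs, hours per week)
    ("brisk walking", 4.0, 2.0),   # moderate: roughly 3-6 METs
    ("running",       9.0, 1.5),   # vigorous: roughly >6 METs
]

met_h_per_week = sum(mets * hours for _, mets, hours in routine)
met_h_per_day = met_h_per_week / 7
print(f"{met_h_per_day:.1f} MET-h/day")  # ~3.1 MET-h/day for this routine

# Eyeballing against the regression: +4 MET-h/day ~ RR 0.90 and
# +7 MET-h/day ~ RR 0.83, so this routine sits a bit under the first bracket.
```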
There’s a limit, though: exercising for too long, or too hard, will eventually stop providing returns. The same study places the upper limit at around a maximum RR = 0.65 when comparing the highest and lowest activity levels. The Mayo Clinic in “Exercising for Health and Longevity vs Peak Performance: Different Regimens for Different Goals” recommends capping vigorous exercise at 5 hours/week for longevity.
A quick rule of thumb is that each hour of exercise can return 7x time dividends (news article). This sounds great, but do some math: put this return together with the 5 hours/week limit, assume that you’re 20yo and doing the maximum exercise you can until 60, and this works out to adding roughly 8 years to your life (note that the study the rule of thumb is based on (2012) gives a slightly lower average maximum gain, around 7 years). Remember the Gompertz curve? We can huff and puff to get great RRs, and it only helps a bit. Unfortunate.
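Spelling out that back-of-the-envelope (assuming 52 weeks/year and taking the 7x figure at face value):

```python
HOURS_PER_YEAR = 24 * 365

exercise_hours = 5 * 52 * 40                         # 5 h/week from age 20 to 60
years_spent = exercise_hours / HOURS_PER_YEAR        # ~1.2 years spent exercising
years_gained = 7 * exercise_hours / HOURS_PER_YEAR   # 7x dividend -> ~8.3 years

print(f"spent ~{years_spent:.1f} years exercising, gained roughly {years_gained:.1f} years")
```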
(While we’re exercising: keep in mind that losing weight isn’t always good: if you’re already at a healthy weight and start losing weight without intending to, that could be a sign that you’re sick and don’t know it yet (source).)
Other studies I looked at:
Unfortunately, most of these studies are based on surveys, which have the usual problems with self reports. There are some studies based on measuring VO2max more rigorously as a proxy for fitness, except those have tiny Ns, in the tens if they’re lucky (it’s expensive to measure VO2max!).
## Diet
Overall, many of these studies are observational and based on self-reports; a few are based on randomized provided food, but the economics dictate they have smaller Ns. I’ve put all the diet-related things together, since in aggregate they are fairly impactful (if difficult to put into practice), but note that some of the subheadings contain less certain results.
### Fruit and vegetables
» +1 serving fruit or vegetable/day, HR = 0.95[0.92, 0.98]
Like exercise, fruits/vegetables don’t stack forever either; there’s around a 5 serving/day limit after which effects level off. Still, that adds up to around HR = 0.75, competitive with maximally effective exercise.
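That “adds up to” is just compounding the per-serving hazard ratio; a one-liner check (assuming, simplistically, that each serving’s effect multiplies independently):

```python
per_serving_hr = 0.95
print(per_serving_hr ** 5)  # ~0.77, the same ballpark as the reported ~0.75 ceiling
```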
Potatoes are a notable exception, having a uniquely high glycemic load among vegetables; this roughly means that your blood sugar will spike after eating potatoes, which seems bad. You can find plenty of debate about whether this is in fact bad[6].
Other reports I looked at:
### Red/Processed Meat
You know bacon is bad for you, but… bacon is pretty bad for you.
From “Red Meat and Processed Meat Consumption and All-Cause Mortality: A Meta-Analysis” (2013, N=unknown subset of 1,330,352) effects from both plain red meat (hamburger, steak) and processed red meat (dried, smoked, bacon):
» Highest vs lowest consumption categories[7] for red meat, RR = 1.10[0.98, 1.22]
» Highest vs lowest consumption categories for processed red meat, RR = 1.23[1.17, 1.28]
There isn’t all-cause data I could find on fried foods specifically, but “Intake of fried meat and risk of cancer: A follow-up study in Finland” specifically covers cancer risks (1994, N=9,990):
» Highest vs lowest tertile fried meat: RR = 1.77[1.11, 2.84]
Note that the confidence intervals are wide: for example, the red meat CI covers 1.0, which is pretty poor (and yet the best all-cause data I could find). If we were strictly following NHST (null hypothesis significance testing), we’d reject this conclusion. However, I’ll begrudgingly accept waggled eyebrows and “trending towards significance”[8].
If you’re paleo, you might not have cause to worry, since you’re probably eating better than most other red meat eaters, but I have no data for your specific situation.
Other reports I looked at:
### Fish (+Fish oil)
Fish is pretty good for you! Fish oil might contribute to fish “consumption”.
“Risks and benefits of omega 3 fats for mortality, cardiovascular disease, and cancer: systematic review” (2006, N=unknown subset of 36,913) looked at both fish consumption and fish oil, finding that fish/fish oil weren’t significantly different:
» High omega-3 (both advice to eat more fish, and supplementation) vs low, RR = 0.87[0.73, 1.03]
Note this analysis only included RCTs.
“Association Between Omega-3 Fatty Acid Supplementation and Risk of Major Cardiovascular Disease Events: A Systematic Review and Meta-analysis” (2012, N=68,680) looked only at fish oil supplementation:
» Omega-3 supplementation vs none, RR = 0.96[0.91, 1.02]
Note that both of these results have relatively wide CI covering 1.0. Additionally, the two studies seem to differ on the relative effectiveness of fish oil.
There’s plenty of exposition on mechanisms for why fish oil (omega-3 oil) might help in the AHA scientific statement “Fish Consumption, Fish Oil, Omega-3 Fatty Acids, and Cardiovascular Disease”.
Also make sure that you’re not eating mercury laden fish while you’re at it; just because Newton did it doesn’t mean you should.
Other studies I looked at:
### Nuts
This study of 7th Day Adventists by “Nut consumption, vegetarian diets, ischemic heart disease risk, and all-cause mortality: evidence from epidemiologic studies” points in the right direction (1999, N=34,198):
» Eating nuts >=5 times/week vs <1 time/week, fatal heart attack RR ~ 0.5[0.32, 0.75] (estimated from a graph)
However, I don’t trust it. Look at how implausibly low that RR is: eating nuts is better than getting the maximum benefit from exercise? How in the world would that work? Unfortunately, I wasn’t able to find any studies that weren’t confounded by religion, so I just have to stay uncertain for now.
## Sleep
We spend a third of our lives asleep, of course it matters. The easiest thing to measure about sleep is the length, so plenty of studies have been done on that. You want to hit a Goldilocks zone of sleep length, not too short or not too long. The literature calls this the aptly named U-shape response.
What’s too short, or too long? It’s frustrating, because one study’s “too long” can be another study’s “too short”, and vice versa.
However, from “Sleep Duration and All-Cause Mortality: A Systematic Review and Meta-Analysis of Prospective Studies” (2010, N=1,382,999):
» Too short (<4-7h), RR = 1.12[1.06, 1.18]
» Too long (>8-12h), RR = 1.30[1.22, 1.38]
And from “Sleep duration and mortality: a systematic review and meta-analysis” (2009, N=unknown):
» Too short (<7h), RR = 1.10[1.06, 1.15]
» Too long (>9h), RR = 1.23[1.17, 1.30]
So there’s a range right around 8 hours that most studies can agree is good.
You might be fine outside of the Goldilocks zone, but if you haven’t made special efforts to get into the zone, you might want to try and get into that 7-9h zone the studies can generally agree on.
Again, most of these studies are survey based. I can’t find the source, but a possible unique confounder is that sleeping unusually long might be a dependent, not independent variable: if you’re sick but don’t know it, one symptom could manifest as sleeping more.
And, if you get enough sleep but feel groggy, you might want to get checked out for sleep apnea.
Other studies I looked at:
# Less Effective
## Flossing
The original longevity guide was enthusiastic about flossing. Looking at “Dental Health Behaviors, Dentition, and Mortality in the Elderly: The Leisure World Cohort Study” (2011, N=690), it’s hard not to be:
» Among daily brushers, never vs everyday flossers, HR = 1.25[1.06, 1.48]
Even more exciting are the dental visit results (N=861):
» Dental exam twice/year vs none, HR = 1.48[1.23, 1.79]
However, the study primarily covers the elderly with an average age of 81yo. Sure, one hopes that the effects are universal, but the non-representative population makes it hard to do so. So while flossing looks good, I’m not ready to trust one study, especially when I can’t find a reasonable meta-analysis covering more than a few hundred people.
As a counterpoint, Cochrane looked at flossing specifically in “Flossing to reduce gum disease and tooth decay” (2011, N=1083), finding that there’s weak evidence for reduction in plaque, but basically nothing else.
I’ll keep flossing, but I’m not confident about the impact of doing so.
Other studies I looked at:
## Sitting
Sitting down all day might-maybe-possibly be bad for health outcomes.
There are some studies trying to measure the impact of sitting length. From “Daily Sitting Time and All-Cause Mortality: A Meta-Analysis” (2013, N=595,086):
» +1 hour sitting with >7 hours/day sitting, HR = 1.05[1.02, 1.08]
However, the aptly named “Does physical activity attenuate, or even eliminate, the detrimental association of sitting time with mortality? A harmonised meta-analysis of data from more than 1 million men and women” (2016, N=1,005,791, no full text) claims the correlation only holds at low levels of activity: once people start getting close to the exercise limit, this study found the correlation between sitting and all-cause mortality disappeared.
» Sitting >6 hours vs <3 hours/day (leisure time), RR = 1.17[1.11, 1.24]
Note that this is the effect for men: the effect for women is larger. Also, this study directly contradicts the other study, claiming that sitting time has an effect on mortality regardless of activity level. And who in the world sits for less than 3 hours/day during their leisure time? Do they just not have leisure time?
Again, these studies were survey based.
The big unanswered question in my mind is whether exercising vigorously will just wipe the need to not sit. So, I’m not super confident you should get a fancy sit-stand desk.
(However, I do know that writing this post meant so much sitting that my butt started to hurt, so even if it’s not for longevity reasons I’m seriously considering it.)
Other reports I looked at:
## Air quality
Air quality has a surprisingly small impact on all-cause mortality.
From “Meta-Analysis of Time-Series Studies of Air Pollution and Mortality: Effects of Gases and Particles and the Influence of Cause of Death, Age, and Season” (2011, N=unknown (but aggregated from 109 studies(?!))):
» +31.3 μg/m3 PM10, RR = 1.02[1.015, 1.024]
» +1.1 ppm CO, RR = 1.017[1.012, 1.022]
» +24.0 ppb NO2, RR = 1.028[1.021, 1.035]
» +31.2 ppb O3 daily max, RR = 1.016[1.011, 1.020]
» +9.4 ppb SO2, RR = 1.009[1.007, 1.012]
(I’m deriving the RR from percentage change in mortality.)
By itself the RR increments aren’t overwhelming. But since it’s expressed as an increment, if there are 50 increments present in a normal day that we can filter out ourselves, then that adds up to some real impact. The increments aren’t tiny compared to absolute values, though. For example, maximum values in NYC during the 2016 summer:
PM10 ~ 58 μg/m3
CO ~ 1.86 ppm
NO2 ~ 60.1 ppb
O3 ~ 86 ppb
SO2 ~ 7.3 ppb
So the difference between a heavily trafficked metro area and a clean room is maybe twice the percentage impacts we’ve seen, which just doesn’t add up to very much. Beijing is another story, but even then I (baselessly) question the ability of household filtration systems to make a sizable dent in interior air quality.
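As a rough illustration of that (my own arithmetic, naively assuming the effect scales linearly with concentration, which real exposure-response curves need not do), scaling the per-increment numbers to the NYC summer maxima above:

```python
# (pollutant, per-increment RR, increment size, NYC 2016 summer max)
pollutants = [
    ("PM10", 1.02,  31.3, 58.0),   # μg/m3
    ("CO",   1.017,  1.1,  1.86),  # ppm
    ("NO2",  1.028, 24.0, 60.1),   # ppb
    ("O3",   1.016, 31.2, 86.0),   # ppb
    ("SO2",  1.009,  9.4,  7.3),   # ppb
]

for name, rr, increment, nyc_max in pollutants:
    excess_pct = (rr - 1) * (nyc_max / increment) * 100  # naive linear scaling
    print(f"{name}: ~{excess_pct:.1f}% excess mortality at the NYC max")
# Each comes out as single-digit percentage points at worst.
```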
There are plenty of possible confounders: it seems the way these sorts of studies are run is by looking at city-level pollution and mortality data, and running the regressions on those data points.
Other studies I looked at:
## Hospital Choice
Going to the hospital isn’t great: medical professionals do the best they can, but they’re still human and can still screw up. It’s just that the stakes are really high. Like, people recommend marking on yourself which side a pending operation should be done on, to reduce chances of catastrophic error.
Quantitatively, “A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care” (2013) says that 1% of deaths in the hospital are adverse deaths. However, note that many adverse deaths weren’t plausibly preventable by anyone other than Omega.
If you’re having a high stakes operation done, “Operative Mortality and Procedure Volume as Predictors of Subsequent Hospital Performance” (2006) recommends taking into account a hospital’s historical morbidity rate and volume for that procedure: if you’re getting heart surgery, you want to go to the hospital that handles lots of heart surgeries, and has done so successfully in the past.
Other studies I looked at:
## Green tea
Unfortunately, there’s no all-cause mortality data on the impact of tea in general, green tea in particular. We might expect it to have an effect through flavonoids.
As a proxy, though, we can look at blood pressure, where lower blood pressure is better. From “Green and black tea for the primary prevention of cardiovascular disease” (2013, N=821):
» Systolic blood pressure, -3.18[-5.25, -1.11] mmHg
» Diastolic blood pressure, -3.42[-4.54, -2.30] mmHg
There’s a smaller effect from black tea, around half the size.
Cochrane also looked at green tea prevention rates for different cancers. From “Green tea for the prevention of cancer” (2009, N=1.6 million), it’s unclear whether there’s any strong evidence of effect for any cancer, in addition to there being a possible garden of forking paths.
If you’re already drinking tea, like me, then switching to green tea is low cost despite any questions about efficacy.
# Borderline efficacy
## Baby Aspirin
The practice of taking tiny daily doses of aspirin, mainly to combat cardiovascular disease. From “Low-dose aspirin for primary prevention of cardiovascular events in Japanese patients 60 years or older with atherosclerotic risk factors: a randomized clinical trial.” (2014, N=14,464):
» Aspirin vs none, aggregate cardiovascular mortality HR = 0.94[0.77, 1.15]
That CI width is very concerning; you can cut the data so you get subsets of cardiovascular mortality to become significant, like looking at only non-fatal heart attacks, but it’s not like there’s a breath of correcting for multiple comparisons anywhere, and the study was stopped early due to “likely futility”.
The side effects of baby aspirin are also concerning. Internal bleeding is possible (Mayo clinic article), since aspirin is acting as a blood thinner; however, it isn’t too terrible, since it’s only a 0.13% increase in “serious bleeding” that resulted in hospitalization (from “Systematic Review and Meta-analysis of Adverse Events of Low-dose Aspirin and Clopidogrel in Randomized Controlled Trials” (2006)).
More concerning is the stopping effect. “Low-dose aspirin for secondary cardiovascular prevention – cardiovascular risks after its perioperative withdrawal versus bleeding risks with its continuation – review and meta-analysis” looked into cardiovascular risks when stopping a baby aspirin regime before surgery (because of increased internal bleeding risks), and found that a low single-digit percentage of heart attacks happened shortly after aspirin discontinuation. (I’m having trouble interpreting this report.)
I imagine this is why professionals start recommending baby aspirin to folks above 50yo, since the risks of heart attack start to obviously outweigh the costs of taking aspirin constantly. And speaking of cost: baby aspirin is monetarily inexpensive.
Other studies I looked at:
## Meal Frequency
Some people recommend eating smaller meals more frequently, particularly to lose weight, which is tied to health outcomes.
From “Effects of meal frequency on weight loss and body composition: a meta-analysis” (2015, N=unknown):
» +1 meal/day, -0.27 ± 0.11 kg of fat mass
It’s not really an overwhelming result; taking into account the logistical overhead of planning out extra meals in a society based on 3 square meals a day, is it really worth it to lose maybe half a kilogram of fat?
Other studies I looked at:
## Caloric Restriction
Most longevity folks are really on board the caloric restriction (CR) train. There’s an appealing mechanism where lower metabolic rates produce fewer free radicals to damage cellular machinery, and it’s the exact amount of effort that one might expect from a longevity intervention that actually works.
A common example of CR is the Japanese Ryukyu islands, where there are a surprising number of really old people, who eat a surprisingly low number of calories. However, say it with me: con-found-ed to he-ll! The fact that a single isolated subsection of a single ethnic group have a correlation between CR and longevity doesn’t make me confident that I too can practice CR and tell death to fuck off for a few more years.
So we want studies. Unfortunately, most humans fall into the state of starving and lacking essential nutrients, or having enough calories and nutrients, but almost never the middle ground of having too few calories but all the essential nutrients (2003, literature review). Then there’s the ethics of getting humans to agree to a really long study that controls their diet, so let’s look at animal studies first.
However, different rhesus monkey studies give different answers.
» From “Impact of caloric restriction on health and survival in rhesus monkeys from the NIA study” (2012, N=unknown, no full text), there was no longevity increase for young or old rhesus monkeys.
» However, from “Caloric restriction delays disease onset and mortality in rhesus monkeys” (2009, N=76), there was a 30% reduction in death over 20 years.
Thankfully they’re both randomized, but it doesn’t really help when they end up with conflicting conclusions. You’d hope there would be better support even in animal models for something that should have huge impacts.
What else could we look at? We’re not going to wait for an 80-year human study to finish (the ongoing CALERIE study comes close), so maybe we could look at intermediate markers that are known to have an impact on longevity and go from there.
A CALERIE checkpoint study, “A 2-Year Randomized Controlled Trial of Human Caloric Restriction: Feasibility and Effects on Predictors of Health Span and Longevity” (2015, N=218), looks at the impact of 25% CR on blood pressure:
» Mean blood pressure change, around -3 mmHg (read from a chart)
Pretty good, but that’s also around the impact of green tea. Then, there’s the implied garden of forking paths bringing in multiple comparisons, since the study in the same cluster looks at multiple types of cholesterol and insulin resistance markers.
Finally, there’s the costs: you have to exert plenty of willpower to actually accomplish CR. For something with such large costs, the evidence base just isn’t there.
## Chocolate
Chocolate has some impact on blood pressure. “Effect of cocoa on blood pressure” (2017, N=1804, Cochrane) finds that eating chocolate lowers your blood pressure:
» Systolic blood pressure, -1.76[-3.09, -0.43] mmHg
» Diastolic blood pressure, -1.76[-2.57, -0.94] mmHg
However, if you’re normotensive then there’s no impact on blood pressure, and only taking into account hypertensives the effect jumps to -4 mmHg. Feel free to keep eating your chocolate, but don’t expect miracles.
## Social Interaction
Having a social life looks like a really great intervention.
From “Social Relationships and Mortality Risk: A Meta-analytic Review” (2010, N=308,849):
» Weaker vs stronger relationships, OR = 1.50[1.42, 1.59]
And from “Social isolation, loneliness, and all-cause mortality in older men and women” (2013, N=6500):
» Highest vs other quintiles of social isolation, HR = 1.26[1.08, 1.48]
And from “Marital status and mortality in the elderly: A systematic review and meta-analysis” (2007, N>250,000, no full text):
» Married vs all currently non-married, RR = 0.88[0.85, 0.91]
You can propose a causal mechanism off the top of your head: people with more friends are less depressed which just has good health outcomes.
However, the alarm bells should be ringing: is the causal relationship backwards? Are healthier people more prone to socializing? Do the confounders never end? The kicker is that all these studies are looking at the elderly (above 50yo at least), which reduces their general applicability even more.
Other studies I looked at:
## Cellphone Usage
Remember when everyone was worried that chronic cellphone usage was going to give us all cancer?
Well “Mobile Phone Use and Risk of Tumors: A Meta-Analysis” (2008, N=37,916) says it actually does:
» Overall tumor, OR = 1.18[1.04, 1.34]
» Malignant tumor, OR = 1.00[0.89, 1.13]
Since we’re worried about malignant tumors, it’s hard to say we should be worried by cellphones.
Other studies I looked at:
# Unproven
## Confusing thirst with hunger
Some people recommend taking a drink when you feel hungry, the idea being that thirst sometimes manifests as hunger, and you can end up eating fewer calories.
Unfortunately, I couldn’t find any studies that tried to look into this specifically: the closest thing I found was “Hunger and Thirst: Issues in measurement and prediction of eating and drinking” (2010) which reads like a freshman philosophy paper, and “Thirst-drinking, hunger-eating; tight coupling?” (2009, N=50?) which fails to persuade me about… anything, really.
## Stress Reduction in a Pill
There are some “natural” plants rumored to have stress reduction effects, Rhodiola rosea and Ashwagandha root.
Meta-analysis on Rhodiola, “The effectiveness and efficacy of Rhodiola rosea L.: A systematic review of randomized clinical trials” (2011, N=unknown) found that Rhodiola had effects on something, but the study was basically a fishing expedition. Even the study name betrays that it doesn’t matter what it’s effective at, just that it’s effective.
Another meta-analysis, “Rhodiola rosea for physical and mental fatigue: a systematic review” (2012, N>176) looked specifically at fatigue and found mixed results.
Meta-analysis on Ashwagandha, “Prospective, Randomized Double-Blind, Placebo-Controlled Study of Safety and Efficacy of a High-Concentration Full-Spectrum Extract of Ashwagandha Root in Reducing Stress and Anxiety in Adults” (2012, N=64) found reductions in self-reported stress scales and cortisol levels (and with RCTs!).
Look, the Ns are tiny, and the studies the meta-analyses are based on are old, and who knows if the Russians were conducting their side of the studies right (Rhodiola originated in Russia, so many of the studies are Russian).
I’m including this because I got excited when I saw it in the original longevity post: stress reduction in a pill! Why do the hard work of meditation when I could just pop some pills (a very American approach, I know)? It just doesn’t look like the evidence base is trustworthy, and my personal experiences confirm that if there’s an effect it’s subtle (Whole Foods carries both Rhodiola and Ashwagandha, so you can try them out for yourself for like \$20).
Other studies I looked at:
## Water Filters
Unfortunately, there’s basically no research on health effects from water filtration in 1st world countries above and beyond municipal water treatment. Most filtration research is either about how adding any filtration to 3rd world countries has massive benefits, or how bacteria can grow on activated carbon granules. Good to know, but on reflection did we expect bacteria to stop growing wherever it damn well pleases?
So keep your Brita filter, but it’s not like we know for sure whether it’s doing anything either. Probably not worth it to go out of your way to get one.
## Hand sanitizer
So I keep hand sanitizer in multiple places in my apartment, but does it do anything?
I only found “Effectiveness of a hospital-wide programme to improve compliance with hand hygiene” (2000, N=unknown), which focused on hospital health outcomes impacted by hand washing adherence. First, not all doctors wash their hands regularly (40% compliance rates in 2011) (scholarly overview), which is worrying. Second, there’s a positive trend between hand washing (including hand sanitizers) and outcomes:
» From moving 48% hand washing adherence to 66%, the hospital-wide infection rate decreased from 16.9% to 9.9%.
However, keep in mind that home and work are usually less adverse environments than a hospital; there are fewer people with compromised immune systems, there are fewer gaping wounds (hopefully). The cited result is probably an upper bound for us non-hospital folk.
(There’s also this cute study: hand sanitizer contains chemicals that make it easier for other chemicals to penetrate the skin, and freshly printed receipts have plenty of BPA on the paper. This means that sanitizing and then handling a receipt will lead to a spike of BPA in your bloodstream. I presume that relative to eating with filthy hands the BPA impact is negligible, but damn it, researchers are doing these cute small scale studies instead of the huge randomized trials I want.)
Other studies I looked at:
## Doctor visits
Should you visit your doctor for an annual checkup? My conscientious side says “of course”, but my contrarian side says “of course not”.
Well, “General health checks in adults for reducing morbidity and mortality from disease” (2012, N=182,880, Cochrane) says:
» Annual checkup vs no exam, RR = 0.99[0.95, 1.03]
So basically no impact! Ha, take that, couple hour appointment!
However, The Chicago Tribune notes some mitigating factors, like the main studies the meta-analysis is based on are old, like 1960s old.
## Metformin
I didn’t look at metformin in my main study period: I knew it had some interesting results, but it also caused gastrointestinal distress, better known as diarrhea. It brings to mind the old quip: metformin doesn’t make you live longer, it just feels like it[9].
However, while I was reading Tools of Titans, Dominic D’Agostino floated an intriguing idea: he would titrate the metformin dose from some tiny amount until he started exhibiting GI symptoms, and then dial it back a touch. I don’t think people have started even doing small scale studies around this, but it might be worth looking into.
# Other
There’s some stuff that doesn’t have a cost-benefit calculation attached, but I’m including anyways. Or, there are things that won’t help you, but might help the people around you.
## CPR
From “Effectiveness of Bystander-Initiated Cardiac-Only Resuscitation for Patients With Out-of-Hospital Cardiac Arrest” (2007, N=4902 heart attacks):
» Cardiac-only CPR vs no CPR, OR = 1.72[1.01, 2.95]
So the odds ratio looks pretty good, except that the CI is really wide, and in absolute terms most people still die from heart attacks: administering CPR raises the chances of survival from 2.5% to 4.3%. So, spending more than a few hours practicing CPR is chasing some really tail risks[10].
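Since it’s easy to misread an OR as “1.72x the survival rate”, here’s the odds-to-probability conversion behind those absolute numbers (my own arithmetic, using the ~2.5% baseline quoted above):

```python
baseline_p = 0.025                       # survival without bystander CPR
baseline_odds = baseline_p / (1 - baseline_p)
cpr_odds = 1.72 * baseline_odds          # apply the odds ratio
cpr_p = cpr_odds / (1 + cpr_odds)
print(f"{cpr_p:.1%}")                    # ~4.2%, matching the ~4.3% figure above
```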
However, have two people in your friend group that know CPR, and they can provide a potential buff to everyone around them (two, because you can’t give CPR to yourself). In a similar vein, the Heimlich maneuver might be good to know.
Other studies I looked at:
## Smoke Alarm testing
Death by fire is not super common. That said, these days it’s cheap to set up a reminder to check your alarm on some long interval, like 6 months.
## Quikclot
It’s unlikely you’ll need to do trauma medicine in the field, but if you’re paranoid about tail risk then quikclot (and competitors) can serve as a buttress against bleeding out. Some folks claim that tourniquets are better, but the trauma bandages are a bit more versatile, since you can’t tourniquet your chest.
It’s not magical: since the entire thing becomes a clot, it’s basically just moving a life threatening wound from the field into a hospital. Also make sure to get the bandage form, not the powder; some people have been blinded when the wind blew the clot precursor into their eyes.
# Cryonics
Of course, this post wouldn’t be complete without a nod to cryonics. It’s the ultimate backstop. If all else fails, there’s one last option to make a Hail Mary throw into the future.
Obviously there are no empirical RR values I can give you: you’ll have to estimate your own probabilities and weigh your own values.
# WTF, Science?
The overarching story is that we cannot trust anything, because almost all the studies are observational and everything could be confounded to hell by things outside the short list that every study incants they controlled for and we would have no idea.
Like Gwern says, even the easiest things to randomize, like giving people free beer, aren’t being done, much less on a scale that could give us some real confidence.
There is too much disregard for the garden of forking paths in this post-replication crisis world, and many studies are focused on subgroups that plausibly won’t generalize (ex. the elderly).
And what’s up with the heterogeneity in meta-analyses? If every single analysis results in “these results displayed significant heterogeneity”, then what’s the point? What are we doing wrong?
# What am I doing?
Maybe you want to know what I myself am doing; I suspect people would be interested for the same reason journalists intersperse a perfectly good technical thriller with human interest vignettes, so here:
• Continuing vitamin D supplementation, and getting a couple minutes of sun when I can.
• Making an effort to eat more vegetables, less bacon/potatoes (to be honest, I’m more optimistic about cutting out the bacon than potatoes), more fish, and replacing more of my snacking with walnuts.
• Keep taking fish oil.
• Exercise better: I haven’t upped the intensity of my routine in a while. I probably need some more aerobic work, too.
• Tell myself I should iron out my sleep schedule.
• Get myself a standing desk for home: I have a standing desk at work, so I’m already halfway there.
• Buy an air filter: low impact, but whatever, gimmie my percentage points of RR.
• Switch from drinking black tea to green tea.
• Cut back on donating blood. I’ll keep doing it because it’s also wrapped up in “doing good things”, but I was doing it partly selfishly based on the non-quasi-randomized studies. Besides, I have shitty blood.
# TLDR
Effective and certain:
• Supplement vitamin D.
Effective, possibly confounded:
• Exercise vigorously 5 hours/week.
• Eat more fruits and vegetables, more fish, less red meat, cut out the bacon.
• Get 7-9 hours of sleep.
Less effective, less certain:
• Brush your teeth and floss daily.
• Try to not sit all day.
• Regarding air quality, don’t live in Beijing.
[1] If you need me to go through the science of smoking, then let me know and I can do so: I mostly skipped it because I’m already not smoking, and the direction of my study was partly determined by what could be applicable to me. As a non-smoker, I didn’t even notice it was missing until a late editing pass.
[2] The abstract reports results in terms of percentage mortality decrease, which I believe maps to the same RR I gave.
[4] The Cochrane Group does good, rigorous analysis work. Gwern is an independent researcher in my in-group, and he seems to be better at this sort of thing than I am.
[5] Annoyingly, some meta-analyses don’t report the aggregate sample sizes for analyses that only use a subset of the analyzed reports.
[6] For example, Scott’s review of The Hungry Brain points out that some people think potatoes are great at satiating appetites, so it might in fact work out in favor of being okay.
[7] These category comparisons are loose, since some studies will report quartiles and others will use tertiles, so the analysis simply goes with the largest effect possible across all studies.
[8] Yes, it’s fucking stupid I have to stoop to this.
[9] Originally “marriage doesn’t make you live longer, it just feels like it.”
[10] I know, it’s ironic that I’m calling this a tail risk, when we’re pushing something as stubborn as the Gompertz curve.
We add negative numbers and zero to the natural numbers to make them closed under subtraction; the very same thing happens with division (rational numbers) and also with the root of -1 (complex numbers).
Why isn't this trick performed with division by zero?

You can add division by zero to the rational numbers if you're careful. Let's say that a "number" is a pair of integers written in the form $a\over b$. Normally, we would also say that $b\not=0$, but today we'll omit that. Let's call numbers of the form $a\over 0$ warped. Numbers that aren't warped are straight.

We usually like to say that $a\over b = c\over d$ if $ad=bc$, but today we'll restrict that and say it holds only if neither $b$ nor $d$ is 0. Otherwise we'd get that $1\over 0 = 2\over 0 = -17\over 0$, which isn't as interesting as it could be. But even with the restriction, we still have $1\over 2=2\over 4$, so the straight numbers still behave as we expect. In particular, we still have the regular integers: the integer $m$ appears as the straight number $m\over 1$.

Addition is defined as usual: $a\over b + c\over d = ad+bc\over bd$. So is multiplication: $a\over b \cdot c\over d = ac\over bd$. Note that any sum or product that involves a warped number has a warped result, and any sum or product that involves $0\over 0$ has the result $0\over 0$. The warped numbers are like a hole that you can fall into but can't climb out of, and $0\over 0$ is a deeper hole inside the first hole.

Now, as Chris Eagle indicated, something has to go wrong, but it's not as bad as it might seem at first. Addition and multiplication are still commutative and associative. You can't actually prove that $0=1$. Let's go through Chris Eagle's proof and see what goes wrong. Chris Eagle starts by writing $1/0 = x$ and then multiplying both sides by 0. 0 in our system is $0\over 1$, so we get $1\over 0 \cdot 0\over 1 = x\cdot 0$, then $0\over 0 = x\cdot 0$. Right away the proof fails, because it wants to have 1 on the left-hand side, but we have $0\over 0$ instead, which is different.

So what does go wrong? Not every number has a reciprocal. The reciprocal of $x$ is a number $y$ such that $xy = 1$. Warped numbers do not have reciprocals. You might want the reciprocal of $2\over 0$ to be $0\over 2$, but $2\over 0\cdot0\over 2 = 0\over 0$, not $1\over 1$. So any time you want to take the reciprocal of a number, you have to prove first that it's not warped.
Similarly, warped numbers do not have negatives. There is no number $x$ with ${1\over 0}+x = 0$. Usually $x-y$ is defined to be $x + (-y)$, and that no longer works, so if we want subtraction we have to find something else. We can work around that easily by defining ${a\over b} - {c\over d} = {ad-bc\over bd}$. But then we lose the property that $x - y + y = x$, which only holds for straight numbers. Similarly, we can define division, but if you want to simplify $xy\div y$ to $x$ you'll have to prove first that $y$ is straight.
What else goes wrong? We said we want $a\over b = ka\over kb$ when $a\over b$ is straight and $k\not=0$; for example we want $1\over 2=10\over 20$. We would also like ${a\over b}+{c\over d} = {ka\over kb} + {c\over d}$ under the same conditions. If $c\over d$ is straight, this is fine, but if $d=0$ then we get $bc\over 0 = kbc\over 0$. Since $bc$ might be 1, and $k$ can be any nonzero integer, we would have $p\over 0 = q\over 0$ for every nonzero $p$ and $q$. In other words, all our warped numbers are equal, except for $0\over 0$. We have a choice about whether to accept this. The alternative is to say that the law "$a + c = b + c$ whenever $a = b$" applies only when $c$ is straight.
At this point you should start to see why nobody does this. Adding a value $c$ to both sides of an equation is an essential technique. If we throw out techniques as important as that, we won't be able to solve many problems. On the other hand, if we keep the techniques and make all the warped numbers equal, then they don't really tell us anything about the answer except that we must have used a warped number somewhere along the way. You never get any useful results from arithmetic on warped numbers: ${a\over 0} + {b\over 0} = {0\over 0}$ for every $a$ and $b$. And once you're into the warp zone you can't get back out; the answer to any question involving warped numbers is a warped number itself. So if you want a useful result out, you must avoid using warped numbers in your calculations.
So let's say that any calculation that involves a warped number anywhere is "spoiled", because we're not going to get any useful answer out of it at the end. At best we'll get a warped answer, and we're likely to get $0\over 0$, which tells us nothing. We would like some assurance that a particular calculation is not going to be spoiled. How can we get that assurance? By making sure we never use warped numbers. How can we avoid warped numbers? Oh... by forbidding division by zero!
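A minimal sketch of this arithmetic in Python (the class and the examples are mine, purely for illustration): pairs with denominator 0 are the warped numbers, and the definitions above make warpedness contagious.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pair:
    """A 'number' a/b with b allowed to be 0, using the definitions above."""
    a: int
    b: int

    def warped(self):
        return self.b == 0

    def __add__(self, other):
        # a/b + c/d = (ad + cb)/(bd)
        return Pair(self.a * other.b + other.a * self.b, self.b * other.b)

    def __mul__(self, other):
        # (a/b)(c/d) = (ac)/(bd)
        return Pair(self.a * other.a, self.b * other.b)

    def equals(self, other):
        # a/b = c/d (via ad = bc) is declared only when neither denominator is 0.
        return (not self.warped() and not other.warped()
                and self.a * other.b == other.a * self.b)

print(Pair(1, 2).equals(Pair(2, 4)))   # True: straight numbers behave as usual
print(Pair(1, 2) + Pair(1, 0))         # Pair(a=2, b=0): warped in, warped out
print(Pair(1, 0) * Pair(0, 2))         # Pair(a=0, b=0): the deeper hole
```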
# How do you calculate the arcsin (sqrt(3)/2)?
Calculate $\arcsin \left(\frac{\sqrt{3}}{2}\right)$
Ans: $\frac{\pi}{3}$ and $\frac{2 \pi}{3}$
$\sin x = \frac{\sqrt{3}}{2}$ --> arc $x = \frac{\pi}{3}$.
Trig unit circle gives another arc $x = \frac{2 \pi}{3}$ that has the same sin value $\left(\frac{\sqrt{3}}{2}\right) .$
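As a quick check against the unit circle: $\sin \frac{\pi}{3} = \frac{\sqrt{3}}{2}$, and since $\sin(\pi - x) = \sin x$, also $\sin \frac{2 \pi}{3} = \frac{\sqrt{3}}{2}$. The principal value of $\arcsin \left(\frac{\sqrt{3}}{2}\right)$ is $\frac{\pi}{3}$.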
# Math Help - subject changes in formula
1. ## subject changes in formula
y=\frac{3}{x + 2} ; make x the subject
G=\pi a^2 k ; make a the subject
A = 2\pi r^2 + 2\pi r h ; make h the subject
Any help?
Thanks so much! life savers!
2. Please use some spaces, brackets and correct symbols to make reading these easier...
Is 1) $y=\frac{3}{x + 2}$ or $y=\frac{3}{x} + 2$?
Is 2) $G=\pi a^2 k$?
Is 3) $A = 2\pi r^2 + 2\pi r h$?
3. All right and yes, 1) it's the first option you put forward.
Would be great help if I could get some answers! | {} |
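For reference, a worked sketch (taking reading 1) as $y=\frac{3}{x + 2}$, as confirmed above, and the positive root in 2)):

1) $y=\frac{3}{x + 2} \Rightarrow x + 2 = \frac{3}{y} \Rightarrow x = \frac{3}{y} - 2$

2) $G=\pi a^2 k \Rightarrow a^2 = \frac{G}{\pi k} \Rightarrow a = \sqrt{\frac{G}{\pi k}}$

3) $A = 2\pi r^2 + 2\pi r h \Rightarrow 2\pi r h = A - 2\pi r^2 \Rightarrow h = \frac{A - 2\pi r^2}{2\pi r}$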
Find the horizontal asymptote to the following function.
How would you find the horizontal asymptote to the following function: $$f(x)=\frac{3e-4}{2e-2}+\frac{e}{2e-2}e^{-x}$$
• What have you tried? What do you know about the exponential function $e^{-x}$, in particular for $x \to \infty$ and for $x \to -\infty$? – PenasRaul Jun 21 '17 at 9:11
• I should remind you that a line $y = m$ is a horizontal asymptote of $f$ whenever $\lim_{x \to \infty} f(x) = m$ or $\lim_{x \to -\infty} f(x) = m$ – PenasRaul Jun 21 '17 at 9:12
• Ah , so it's just the first term, got it, thanks. – Kantura Jun 21 '17 at 9:20 | {} |
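A short sketch of the reasoning in the comments: as $x \to \infty$, $e^{-x} \to 0$, so $$\lim_{x \to \infty} f(x) = \frac{3e-4}{2e-2},$$ and the horizontal asymptote is the line $y=\frac{3e-4}{2e-2}$. As $x \to -\infty$, $e^{-x} \to \infty$, so there is no horizontal asymptote in that direction.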
# Huckel's rule for Aromaticity-what is n?
True, n is just any integer, but it has to signify something! I mean, come on, we're talking science; you can't just use any 'n' straight away without challenging its credibility. Yes, it's true that there are hardly any books which mention what 'n' is; I don't think it's there even in Morrison and Boyd. To answer the question, it's fundamentally based on MO theory: aromatic systems have 4n+2 pi electrons, where n is the number of pairs of degenerate bonding orbitals. Consider benzene as an example. We concern ourselves only with the pi-orbital system. Benzene has six atomic p-orbitals, which give six pi molecular orbitals (MOs): three bonding orbitals, say $\psi_1$, $\psi_2$, $\psi_3$, and three antibonding, say $\psi_4$, $\psi_5$, $\psi_6$. The 6 p-electrons arrange themselves in the 3 bonding orbitals. $\psi_1$ has no node, while $\psi_2$ and $\psi_3$ have one node each. Furthermore, the energy level of the orbitals increases with increasing number of nodes. Thus, $\psi_1$ is at a lower energy level than $\psi_2$ and $\psi_3$, which share the same energy level, having one node each; $\psi_2$ and $\psi_3$ are said to be degenerate. Benzene thus has one pair of degenerate bonding orbitals (i.e., n=1). For higher aromatic systems, the number of pairs of degenerate bonding orbitals increases. Naphthalene has 10 atomic p-orbitals, and thus 10 MOs. Its 5 bonding orbitals contain 2 pairs of degenerate orbitals along with $\psi_1$. $\psi_1$ can hold 2 electrons, while each degenerate pair has a capacity of 4 electrons. Thus the rule 4n+2 describes the configuration having all pi-bonding orbitals completely filled, which is associated with extra stability.
## Tuesday, January 31, 2017
### Home, home on the SPREAD, where the deer and the antelope play....
I truly wish that New York had higher quality exams, as they are intended to be used to measure not only student performance but teacher performance as well.
The analysis continues with question 20 from the January 2017 regents exam in Algebra I (Common Core)
At issue here is choice (2), that refers to the spread of the data. Here, in Algebra I, the word spread should not be used. Spread can be represented many different ways, from range to interquartile range to standard deviation. To the best of my knowledge, standard deviation is not part of the Algebra I knowledge base. Even so, range and interquartile range do not go hand-in-hand: it is possible for a set with a smaller range to have a larger interquartile range.
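A quick illustration of that last claim (a smaller range paired with a larger interquartile range), using made-up data rather than the exam's data sets:

```python
import numpy as np

# Hypothetical data sets chosen only to illustrate the point:
# set_a has the smaller range but the larger interquartile range.
set_a = np.array([0, 2, 2, 8, 8, 10])
set_b = np.array([0, 5, 5, 5, 5, 20])

for name, data in (("A", set_a), ("B", set_b)):
    data_range = data.max() - data.min()
    iqr = np.percentile(data, 75) - np.percentile(data, 25)
    print(name, "range:", data_range, "IQR:", iqr)

# A range: 10 IQR: 6.0
# B range: 20 IQR: 0.0
```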
This choice should most probably have used the word "range" instead of spread. For the record, the ranges are equal in these two sets, but the interquartile ranges are not (7 to 10, or 3 years, for soccer players and 9 to 11, or 2 years, for basketball players). The standard deviation for soccer is 2.05798 and for basketball is 1.81137. So using range, choice (2) is false, while using the other two measures, choice (2) is true.
Could it be the case that the word "spread" was used when the word "range" should have been used?
## Monday, January 30, 2017
### New York has to make better tests!
The above is the first section of a question from the January 2017 New York State regents exam in Algebra II (Common Core).
I believe the question needs the word "tsunami" rather than "tidal". The presence of the word "tidal" in this context illustrates the need for improved "proofreading" in the creation of these exams.
Continuing on, here is question 24 from the New York State Geometry (Common Core) regents exam:
I suspect that the word "cone" here should read "inverted cone". When used by itself, the word "cone" refers to this:
Rarely have I seen a water cup used "point up". Let me correct myself: I have never seen a cup used that way. Should a student solve the question as written, they could be perfectly correct and get an answer not listed. That situation should be avoided at all costs on a state exam.
Let's look at question 8 from the January 2017 NY regents exam in Algebra I (Common Core):
I found this question misleading, since the USPS charges 49 cents for up to 1 ounce and 21 cents for each extra ounce or fraction of an ounce. The best mathematical model would be
$${\rm{Cost}} = 49 + 21\left\lceil w - 1 \right\rceil$$
where w is the weight of the letter in ounces and the costs are measured in cents.
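A small check of that pricing rule in Python (49 cents for the first ounce, then 21 cents for each additional ounce or fraction of an ounce):

```python
import math

def postage_cents(w):
    """Cost in cents of a letter weighing w ounces (w > 0)."""
    return 49 + 21 * math.ceil(w - 1)

print(postage_cents(1.0))  # 49
print(postage_cents(1.2))  # 70 (a fraction of an ounce rounds up)
print(postage_cents(3.0))  # 91
```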
I suspect the question writer was trying to come up with a "real world" application of recursive functions. My advice would be to look again. Question 20 on this test would have made a much better model, as postage must take into account portions of an ounce but mp3 sales would not.
Now comes question 14 from the same Algebra I exam:
The mathematics in this question is basically asking "Which of the following is equal to 6(16)t ?" The rest of the verbiage is due to the attempt to make the problem "real world".
Can't we just ask math questions to test math knowledge?
## Sunday, January 29, 2017
### Clarification needed!
Here is question 24 from the New York State Algebra II (Common Core) Regents exam from January 2017. Please look at it closely!
Now that you have read it carefully, take note that the domain of this question seems to run from -2 past 5. Also take note that if x is less than 1 or greater than 5, one of the sides must have a negative length. Since lengths cannot be negative, this graph can NOT be a model for the volume of a box, hence the question cannot be answered.
### A few thoughts to ponder....
Here is a question from the New York January 2017 Regents Exam in Algebra II (Common Core).
1) How many rabbits were there four weeks ago?
2) What do t and P(t) have to do with it?
3) Suppose a student thought as displayed in this chart. Would they get it right?
now: 5 rabbits
in 28 days: 10 rabbits
in 56 days: 20 rabbits
in 84 days: 40 rabbits
in 112 days: 80 rabbits
in 98 days: between 40 and 80 rabbits
On a different note, here is question 21. Read it carefully, then answer a couple questions below.
1) Can you tell me who gets away with no credit card payments for 73 months? Could I stretch it out another 300 months?
2) If this is supposed to be a "real world" question, can you tell me what world that is?
## Saturday, January 28, 2017
### Technology gives us new ways to look at simple things..
Far too often I hear people talk about how hard math is, which kind of bugs me because math itself is neither hard nor easy. It just is. When they say math is hard, they are actually referring to their interaction with mathematics. With today's technology, a person's initial contacts with mathematics can and should be drastically different from what was possible even just a few years ago. I know that many people have a bad taste for math because they spent years in school struggling to tread water while being placed in a depth that was just over their heads. The panacea effect of calculators (which never really helped) was really just a matter of putting flippers on a non-swimmer.
Every once in while I like to take new technology and use it to look for new ways to teach old concepts, hoping to enable beginners to gain a comfort with the shallow water. Even while doing this care must be taken, as one can drown in just an inch of water. Such an attempt is shown here.
This file is nothing more than part of an attempt to enable students to internalize the concept of slope. My goal is to expand this simple file over time so that a path can be blazed that will connect this simple concept with unique slope of a line, slope-intercept equations, parallel lines, right-triangle trigonometry, and more.
Take note that this file requires the viewer to be able to count. Even such a simple item as the "slope formula" is not needed.
This file can be found here.
## Wednesday, January 25, 2017
### Comparing GeoGebra and Desmos on a Graphing Task
This file I created just as a personal challenge. I make a point to try something new and different every day, and this file I took as a challenge merely because I came across a Desmos version (see it here). I prefer posting with GeoGebra as it seems to give me a bit more control. With GeoGebra I can label items on the graph and have decent control over what appears in different circumstances.
Desmos does a very good job filling the role of graphing calculator, and does it with a much better resolution than the typical handheld calculator. But, I have to be honest, GeoGebra does a lot more. (I should say it can do a whole lot more: it does have a steep learning curve at the start. It is this learning curve that encourages me to push for GeoGebra's use at young ages. A steep slope becomes less steep if it is lengthened. Any ramp user knows that.)
For the moment: I have included in the GeoGebra graph the focus and directrix and a drag-able way of showing the relationship between them and the parabola itself. The equation is also present. I have not found a way of including those features (with labels) in Desmos' embeddable graph. Help me if you can!!
## Monday, January 23, 2017
### Choosing from 4 wrong answers?
The question below is from the June 2016 Algebra II regents exam in New York. Some questions have been raised about it, claiming it might confuse those who are unsure as to how to "count" multiple roots. (For clarity, a multiple root is a crossing root if the root occurs an odd number of times, and a tangent root if it occurs an even number of times.)
Actually, that issue is irrelevant in this question. Based on the accepted standard regarding the meaning of "arrows" at the end of the graph, choice 3 is the only choice satisfying the second and third bulleted items. The first bulleted item is irrelevant in this question.
Or is it? It is possible that choice 3 has at least one root of odd multiplicity of 3 or more, but without a scale on the graph it is impossible to tell.
Or is it? Something did not seem right about choice 3. (Continued below)
I had to do some investigating.
In GeoGebra I created a file including the graphic from choice 3 along with a cubic polynomial graph sharing the x-intercepts with those in choice 3. Since there were no scales on the axes in any of the choices, the only information I could rely on was the relative positions of key points.
Here is the GeoGebra sketch (get it here if you wish):
The dotted blue line marks midway between the leftmost x-intercepts.
No matter what I do to change what is changeable (experimenting with the leading coefficient or the absolute position of the leftmost root), the polynomial's maximum and choice 3's maximum lie on opposite sides of that line.
I will not claim that I have the definitive answer on this issue, but it appears to me that the more you know about cubic polynomial graphs the less likely you are to accept choice 3 as an answer to this question. The problem could have been avoided if actual polynomial graphs had been given.
I believe I understand what the question writers intended with this question, but I must reject the question itself.
## Saturday, January 14, 2017
### How good is your sense of time?
This GeoGebra creation is the result of a discussion involving the "hang time" of a kick in one of the NFL playoff games. The thought smashed into my head later when I heard someone say something like "I don't know how long it was, but it felt like hours".
This is a simple creation (original available here). The duration ranges from 3 seconds to 15 seconds.
## Tuesday, January 10, 2017
### Triangle of Time
Sometimes GeoGebra spurs me on to a new way of looking at something old and familiar.
In this case, a clock. Yesterday I posted the clock I made in GeoGebra. Today I have tweaked it a bit to help pose a question.
The three hands are all pointing to 3 points on the circumference of the circle. These 3 points form a triangle. Can we work with the area of that triangle?
1) The smallest area of the triangle is zero. How often does that happen in a 12-hour time span?
2) What is the largest area? How often does that happen in 12 hours?
A stretch for trig or precalculus students might be generating a graph of the area as a function of time of day. A stretch for calculus students might involve determining exactly when the area is largest by maximizing that function.
A stretch for younger students might be generating this graph on their own. It is too bad that the politics of education make it virtually impossible for a math teacher to take time to work with students on questions like these.
I suspect textbook publishers do not like questions such as this!
## Monday, January 9, 2017
### Can you build a clock?
Many of us take clocks for granted, but there is not a simple clock anywhere. Behind every timepiece is a great deal of mathematics together with either metallurgy or engineering or chemistry or electronics or programming, or maybe all of those!
Here is a basic dial clock made totally within GeoGebra.
GeoGebra does have a feature that allows it to read the time off of your computer, but what happens then is whatever you do with it.
## Friday, January 6, 2017
### Focus, Directrix, and Conics
Back in the classroom I used to wish I had a better way to demonstrate the focus-directrix connections between the conics. The algebraic methods were time-consuming and, I know, contributed to "brain stoppage" by many students. Pencil and paper constructions, taken to a useful stage, would have taken days and days. If only there was a better way...
GeoGebra helped me begin to bridge that gap. My first go at it has produced the file here. Not perfect, but a lot better than what I had before. Check it out and EXPERIMENT!!!
Unfortunately, making this fit in the blog requires it to be tiny. Click here for full version | {} |
Time limit: 2 seconds | Memory limit: 512 MB | Submissions: 210 | Accepted: 91 | Solvers: 71 | Acceptance ratio: 43.558%
## Problem
Farmer John's $N$ cows ($1 \leq N \leq 10^5$), numbered $1 \ldots N$ as always, happen to have too much time on their hooves. As a result, they have worked out a complex social hierarchy related to the order in which Farmer John milks them every morning.
After weeks of study, Farmer John has made $M$ observations about his cows' social structure ($1 \leq M \leq 50,000$). Each observation is an ordered list of some of his cows, indicating that these cows should be milked in the same order in which they appear in this list. For example, if one of Farmer John's observations is the list 2, 5, 1, Farmer John should milk cow 2 sometime before he milks cow 5, who should be milked sometime before he milks cow 1.
Farmer John's observations are prioritized, so his goal is to maximize the value of $X$ for which his milking order meets the conditions outlined in the first $X$ observations. If multiple milking orders satisfy these first $X$ conditions, Farmer John believes that it is a longstanding tradition that cows with lower numbers outrank those with higher numbers, so he would like to milk the lowest-numbered cows first. More formally, if multiple milking orders satisfy these conditions, Farmer John would like to use the lexicographically smallest one. An ordering $x$ is lexicographically smaller than an ordering $y$ if for some $j$, $x_i = y_i$ for all $i < j$ and $x_j < y_j$ (in other words, the two orderings are identical up to a certain point, at which $x$ is smaller than $y$).
## Input
The first line contains $N$ and $M$. The next $M$ lines each describe an observation. Line $i+1$ describes observation $i$, and starts with the number of cows $m_i$ listed in the observation followed by the list of $m_i$ integers giving the ordering of cows in the observation. The sum of the $m_i$'s is at most $200,000$.
## Output
Output $N$ space-separated integers, giving a permutation of $1 \ldots N$ containing the order in which Farmer John should milk his cows.
## Sample Input 1
4 3
3 1 2 3
2 4 2
3 3 4 1
## Sample Output 1
1 4 2 3
## Hint
Here, Farmer John has four cows and should milk cow 1 before cow 2 and cow 2 before cow 3 (the first observation), cow 4 before cow 2 (the second observation), and cow 3 before cow 4 and cow 4 before cow 1 (the third observation). The first two observations can be satisfied simultaneously, but Farmer John cannot meet all of these criteria at once, as to do so would require that cow 1 come before cow 3 and cow 3 before cow 1.
This means there are two possible orderings: 1 4 2 3 and 4 1 2 3, the first being lexicographically smaller. | {} |
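One possible approach (a sketch, not an official solution): binary search on $X$, since consistency of the first $X$ observations is monotone in $X$, and then run Kahn's algorithm with a min-heap to obtain the lexicographically smallest topological order. In Python:

```python
import heapq

def milking_order(n, observations):
    def topo_order(x):
        """Lexicographically smallest topological order consistent with the
        first x observations, or None if those observations conflict."""
        adj = [[] for _ in range(n + 1)]
        indeg = [0] * (n + 1)
        for obs in observations[:x]:
            for a, b in zip(obs, obs[1:]):
                adj[a].append(b)
                indeg[b] += 1
        heap = [v for v in range(1, n + 1) if indeg[v] == 0]
        heapq.heapify(heap)
        order = []
        while heap:
            v = heapq.heappop(heap)          # always milk the smallest available cow
            order.append(v)
            for w in adj[v]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    heapq.heappush(heap, w)
        return order if len(order) == n else None

    lo, hi = 0, len(observations)
    while lo < hi:                            # largest X whose prefix is consistent
        mid = (lo + hi + 1) // 2
        if topo_order(mid) is not None:
            lo = mid
        else:
            hi = mid - 1
    return topo_order(lo)

print(*milking_order(4, [[1, 2, 3], [4, 2], [3, 4, 1]]))   # 1 4 2 3
```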
## Addition of Fractions Exercises – Set 2
1.) $\dfrac{1}{7} + \dfrac{2}{7}$
2.) $\dfrac{1}{12} + \dfrac{5}{12}$
3.) $\dfrac{7}{15} + \dfrac{2}{15} + \dfrac{6}{15}$
4.) $\dfrac{1}{3} + \dfrac{1}{4}$
5.) $\dfrac{2}{9} + \dfrac{1}{3}$
6.) $\dfrac{3}{8} + \dfrac{1}{4}$
7.) $8 \dfrac{2}{3} + \dfrac{3}{2}$
8.) $\dfrac{1}{2} + \dfrac{2}{3} + \dfrac{3}{4}$
9.) $7 \dfrac{2}{7} + 6 \dfrac{3}{14}$
10.) $3 \dfrac{1}{4} + 2 \dfrac{1}{8}$
1.) $\dfrac{3}{7}$
2.) $\dfrac{6}{12} = \dfrac{1}{2}$
3.) $\dfrac{15}{15} = 1$
4.) $\dfrac{7}{12}$
Solution:
LCD: 12
$\dfrac{4}{12} + \dfrac{3}{12} = \dfrac{7}{12}$
5.) $\dfrac{5}{9}$
Solution:
LCD: 9
$\dfrac{2}{9} + \dfrac{3}{9} = \dfrac{5}{9}$
6.) $\dfrac{5}{8}$
Solution:
LCD: 8
$\dfrac{3}{8} + \dfrac{2}{8} = \dfrac{5}{8}$
7.) $10 \dfrac{1}{6}$
Solution:
LCD: 6
$8 \dfrac{4}{6} + \dfrac{9}{6} = 8 \dfrac{13}{6} = 10 \dfrac{1}{6}$
8.) $1 \dfrac{11}{12}$
Solution:
LCD: 12
$\dfrac{6}{12} + \dfrac{8}{12} + \dfrac{9}{12} = \dfrac{23}{12} = 1 \dfrac{11}{12}$
9.) $13 \dfrac{1}{2}$
Solution:
LCD: 14
$7 \dfrac{4}{14} + 6 \dfrac{3}{14} = 13 \dfrac{7}{14} = 13 \dfrac{1}{2}$
10.) $5 \dfrac{3}{8}$
Solution:
LCD: 8
$3 \dfrac{2}{8} + 2 \dfrac{1}{8} = 5 \dfrac{3}{8}$
## LCM and GCD Exercises Set 2
Here are some Civil Service exam exercises on GCD and LCM.
1.) What is the GCD of 8, 20, and 28?
2.) What is the GCD of 21, 35, and 56?
3.) What is 18/54 in lowest terms?
4.) What is 38/95 in lowest terms?
5.) What is the LCM of 6 and 8?
6.) What is the LCM of 5, 6, and 12?
7.) What is the product of the LCM and the GCD of 4, 8, and 20?
8.) There are 18 red marbles and 27 blue marbles to be distributed among children. What is the maximum number of children that can receive the marbles if each kid receives the same number of marbles for each color and no marble is to be left over?
9.) In a school sportsfest, there are 60 Grade 4 pupils, 48 Grade 5 pupils and 36 Grade 6 pupils. What is the largest number of teams that can be formed if the pupils in each Grade level are equally distributed and no pupil is left without a team?
10.) In a disco, the red lights blink every 3 seconds and the blue lights blink every 5 seconds. If the two colored lights blink at the same time if you turn them on, they will blink at the same time every ___ seconds.
Solution:
Divisors of 8 – 1, 2, 4, 8
Divisors of 20 – 1, 2, 4, 5, 10, 20
Divisors of 28 – 1, 2, 4, 7, 14, 28
Solution:
Divisors of 21 – 1, 3, 7, 21
Divisors of 35 – 1, 5, 7, 35
Divisors of 56 – 1, 2, 4, 7, 8, 14, 28, 56
Note: Reducing fractions to lowest terms is one of the applications of GCD.
Solution:
Divisors of 18 – 1, 2, 3, 6, 9, 18
Divisors of 54 – 1, 2, 3, 6, 9, 18, 27, 54
Numerator = 18 divided by 18 (GCD) = 1
Denominator = 54 divided by 18 (GCD) = 3
Solution:
Divisors of 38 – 1, 2, 19, 38
Divisors of 95 – 1, 5, 19, 95
Solution:
Multiples of 6 – 6, 12, 18, 24
Multiples of 8 – 8, 16, 24
Solution:
Multiples of 5 – 5, 10, 15, 20, 25, 30 … 55, 60
Multiples of 6 – 6, 12, 18, 24, 30, 36, 42, 48, 54, 60
Multiples of 12 – 12, 24, 36, 48, 60
Solution
GCD of 4, 8, and 20 is 4.
LCM of 4, 8, and 20 is 40.
4 x 40 = 160.
Solution:
Divisors of 18 – 1, 2, 3, 6, 9, 18
Divisors of 27 – 1, 3, 9, 27
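A quick way to check these answers (and the remaining items) in Python:

```python
from functools import reduce
from math import gcd

def gcd_all(*nums):
    return reduce(gcd, nums)

def lcm_all(*nums):
    return reduce(lambda a, b: a * b // gcd(a, b), nums)

print(gcd_all(8, 20, 28))                     # 1.) 4
print(gcd_all(21, 35, 56))                    # 2.) 7
print(lcm_all(6, 8))                          # 5.) 24
print(lcm_all(5, 6, 12))                      # 6.) 60
print(gcd_all(4, 8, 20) * lcm_all(4, 8, 20))  # 7.) 4 x 40 = 160
print(gcd_all(18, 27))                        # 8.) 9 children
print(gcd_all(60, 48, 36))                    # 9.) 12 teams
print(lcm_all(3, 5))                          # 10.) every 15 seconds
```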
# A dealer originally bought 100 identical batteries at a tota
A dealer originally bought 100 identical batteries at a total cost of q dollars. If each battery was sold at 50 percent above the original cost per battery, then, in terms of q, for how many dollars was each battery sold?
(A) 3q/200
(B) 3q/2
(C) 150q
(D) q/100
(E) 150/q
Re: A dealer originally bought 100 identical batteries at a tota [#permalink] 07 Dec 2012, 03:27
A dealer originally bought 100 identical batteries at a total cost of q dollars. If each battery was sold at 50 percent above the original cost per battery, then, in terms of q, for how many dollars was each battery sold?
(A) 3q/200
(B) 3q/2
(C) 150q
(D) q/100
(E) 150/q
ALGEBRAIC APPROACH:
The cost of 100 batteries is q dollars, thus the cost of 1 battery is q/100 dollars. Since the selling price is 50% greater than the cost price, the selling price is q/100*1.5 = q/100*3/2 = 3q/200.
NUMBER PLUGGING APPROACH:
Say q=$200, then the cost of 1 battery is q/100=$2.
The selling price is 2*1.5 = $3. Now, plug q = 200 into the answers to see which yields $3. Only answer choice A works.
Re: A dealer originally bought 100 identical batteries at a tota [#permalink] 06 Mar 2015, 02:02
Quote:
ALGEBRAIC APPROACH:
The cost of 100 batteries is q dollars, thus the cost of 1 battery is q/100 dollars. Since the selling price is 50% greater than the cost price than the selling price is q/100*1.5=q/100*3/2=3q/200.
Hi Bunuel,
According to your approach the answer comes out to be 2q/300 and not 3q/200. Could you kindly explain?
Thanks,
AJ
Re: A dealer originally bought 100 identical batteries at a tota [#permalink] 06 Mar 2015, 03:53
aj0809 wrote:
Quote:
ALGEBRAIC APPROACH:
The cost of 100 batteries is q dollars, thus the cost of 1 battery is q/100 dollars. Since the selling price is 50% greater than the cost price than the selling price is q/100*1.5=q/100*3/2=3q/200.
Hi Bunuel,
According to your approach the answer comes out to be 2q/300 and not 3q/200. Could you kindly explain?
Thanks,
AJ
Nothing wrong there: q/100*1.5 = q/100*3/2 = 3q/200.
# NR Interference Modeling with Toroidal Wrap-Around
This example shows how to model a 19-site cluster with toroidal wrap-around, as described in ITU-R M.2101-0. This example uses the system-level channel model that is specified in 3GPP TR 38.901. The wrap-around provides uniform interference at the cluster edge. All the cells in the cluster operate in the same frequency band with the serving gNB at the center of the cell. You can enable or disable the wrap-around to observe that with wrap-around, the performance metrics of an edge cell become similar to the center cell.
### Introduction
This example models:
• A 19-site cluster with three cells/site, giving a total of 57 cells. A site is represented by 3 colocated gNBs with directional antennas covering 120 degrees area (that is, 3 sectors per site).
• Co-channel intercell interference with wrap-around modeling for removing edge effects.
• System-level channel model based on 3GPP TR 38.901.
• Downlink shared channel (DL-SCH) data transmission and reception.
• DL channel quality measurement by UEs, based on the CSI-RS received from the gNB.
• Uplink shared channel (UL-SCH) data transmission and reception.
• UL channel quality measurement by gNBs, based on the SRS received from the UEs.
Nodes send the control packets (buffer status report (BSR), DL assignment, UL grants, PDSCH feedback, and CSI report) out of band, without the need of resources for transmission and assured error-free reception.
### Toroidal Wrap-around Modeling
To simulate the behavior of a cellular network without introducing edge effects, this example models an infinite cellular network by using toroidal wrap-around. The entire network region relevant for simulations is a cluster of 19 sites (shown bold in this figure). The left-hand figure shows the network region of 19 sites without wrap-around. Site 0 of the central cluster, shown in red, is uniformly surrounded and experiences interference from all sides. A cell in an edge site like site 15 experiences less interference. In the right-hand figure, the wrap-around repeats the original cluster six times to uniformly surround the central cluster.
In the wrap-around model, the signal or interference from any UE to a cell is treated as if that UE is in the original cell cluster and the gNB in any of the seven clusters as specified in ITU-R M.2101-0. The distances used to compute the path loss from a transmitter node at $\left(a,b\right)$ to a receiver node at $\left(x,y\right)$ is the minimum of these seven distances.
• Distance between $\left(x,y\right)$ and $\left(a,b\right)$
• Distance between $\left(x,y\right)$ and $\left(a-\sqrt{3}D,b-4D\right)$, where $D$ is the distance between two adjacent gNBs (inter-site distance)
• Distance between $\left(x,y\right)$ and $\left(a+\sqrt{3}D,b+4D\right)$
• Distance between $\left(x,y\right)$ and $\left(a-\frac{3\sqrt{3}}{2}D,b+\frac{7}{2}D\right)$
• Distance between $\left(x,y\right)$ and $\left(a+\frac{3\sqrt{3}}{2}D,b-\frac{7}{2}D\right)$
• Distance between $\left(x,y\right)$ and $\left(a-\frac{5\sqrt{3}}{2}D,b-\frac{1}{2}D\right)$
• Distance between $\left(x,y\right)$ and $\left(a+\frac{5\sqrt{3}}{2}D,b+\frac{1}{2}D\right)$
These equations are derived from the equations in ITU-R M.2101-0 Attachment 2 to Annex 1. In 3GPP TR 38.901 and in the figure above, the rings of 6 and 12 sites around the central site are orientated differently from the orientations in ITU-R M.2101-0, so modified equations are required.
If you disable wrap-around, then the distance between nodes is the Euclidean distance.
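As an illustrative sketch (separate from the MATLAB example that follows), the minimum wrap-around distance over the seven image positions listed above can be computed like this in Python:

```python
import math

SQRT3 = math.sqrt(3)

def wraparound_distance(x, y, a, b, D, wrapping=True):
    """Minimum distance from a receiver at (x, y) to a transmitter at (a, b),
    taking the transmitter's six wrap-around images into account. Each image
    sits sqrt(19)*D away from the original position."""
    offsets = [(0.0, 0.0)]
    if wrapping:
        for dx, dy in ((SQRT3 * D, 4 * D),
                       (1.5 * SQRT3 * D, -3.5 * D),
                       (2.5 * SQRT3 * D, 0.5 * D)):
            offsets += [(dx, dy), (-dx, -dy)]
    return min(math.hypot(x - (a + dx), y - (b + dy)) for dx, dy in offsets)

# A node one full cluster shift away is effectively co-located with the original:
print(wraparound_distance(0, 0, SQRT3 * 500, 4 * 500, 500))   # 0.0
```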
### Scenario Configuration
Check if the Communications Toolbox Wireless Network Simulation Library support package is installed. If the support package is not installed, MATLAB® returns an error with a link to download and install the support package.
`wirelessnetworkSupportPackageCheck`
Create a wireless network simulator.
```rng('default'); % Reset the random number generator numFrameSimulation = 1; % Simulation time in terms of number of 10 ms frames networkSimulator = wirelessNetworkSimulator.init();```
Create cell sites, sectors, and UEs in each cell for the urban macro (UMa) scenario.
```numCellSites = 19; % Number of cell sites isd = 500; % Inter-site distance in meters numSectors = 3; % Number of sectors for each cell site numUEs = 2; % Number of UEs to drop per cell wrapping = true; % Enable toroidal wrap-around modeling of cells. Set it to false to disable toroidal wrap-around % Create the scenario builder scenario = h38901Scenario(Scenario="UMa",FullBufferTraffic="on",NumCellSites=numCellSites,InterSiteDistance=isd,NumSectors=numSectors,NumUEs=numUEs,Wrapping=wrapping); % Build the scenario, adding GNB and UE nodes to the simulator configureSimulator(scenario,networkSimulator);```
Create the system-level channel model for the scenario, and connect it to the simulator.
```% Create the channel model channel = h38901Channel(Scenario="UMa"); % Connect the simulator and channel model connect(channel,networkSimulator,scenario.CellSites);```
#### Logging and Visualization Configuration
Set the `enableTraces` to `true` to log the traces. If the `enableTraces` is set to `false`, then the simulation does not log traces. To speed up the simulation, set it to `false`.
`enableTraces = true;`
Set up scheduling logger and phy logger for the cells of interest.
```% Select the cells (that is, the sites and sectors) for which traces and metrics have to be collected siteOfInterest = [0 15]; sectorOfInterest = [2 2]; numCellsOfInterest = length(siteOfInterest); if enableTraces simSchedulingLogger = cell(numCellsOfInterest,1); simPhyLogger = cell(numCellsOfInterest,1); for cellIdx = 1:numCellsOfInterest % Get the gNB and UEs for the cell of interest from the scenario object [gNB,UEs] = getNodesForCell(scenario,siteOfInterest,sectorOfInterest,cellIdx); % Create an object for scheduler traces logging simSchedulingLogger{cellIdx} = helperNRSchedulingLogger(numFrameSimulation,gNB,UEs); % Create an object for PHY traces logging simPhyLogger{cellIdx} = helperNRPhyLogger(numFrameSimulation,gNB,UEs); end end```
The example updates the metrics plots periodically. Set the number of updates during the simulation.
`numMetricsSteps = 5;`
Set up metric visualizers.
```metricsVisualizer = cell(numCellsOfInterest,1); for cellIdx = 1:numCellsOfInterest % Get the gNB and UEs for the cell of interest from the scenario object [gNB,UEs] = getNodesForCell(scenario,siteOfInterest,sectorOfInterest,cellIdx); % Create visualization object for MAC and PHY metrics metricsVisualizer{cellIdx} = helperNRMetricsVisualizer(gNB,UEs,NumMetricsSteps=numMetricsSteps,PlotSchedulerMetrics=true,PlotPhyMetrics=true,CellOfInterest=siteOfInterest(cellIdx)); end ```
Write the logs to MAT-files. You can use these logs for post-simulation analysis.
`simulationLogFile = "simulationLogs";`
### Simulation
Run the simulation for the specified `numFrameSimulation` frames.
```% Calculate the simulation duration (in seconds) simulationTime = numFrameSimulation * 1e-2; % Run the simulation run(networkSimulator,simulationTime);```
### Simulation Visualization
For the cells of interest, run time visualizations show various performance indicators at multiple time steps during the simulation. For a detailed description, see the NR Cell Performance Evaluation with MIMO example.
At the end of the simulation, the achieved values for system performance indicators are compared to their theoretical peak values (considering zero overheads). Performance indicators displayed are achieved data rate (UL and DL), achieved spectral efficiency (UL and DL), and block error rate (BLER) observed for UEs (UL and DL). The peak values are calculated as per 3GPP TR 37.910.
Note that to get meaningful results, the simulation has to be run for a longer duration and for a larger number of UEs per cell.
```for cellIdx = 1:numCellsOfInterest fprintf('\n\nMetrics for site %d, sector %d :\n\n',siteOfInterest(cellIdx),sectorOfInterest(cellIdx)); displayPerformanceIndicators(metricsVisualizer{cellIdx}); end```
```Metrics for site 0, sector 2 : ```
```Peak UL Throughput: 54.64 Mbps. Achieved Cell UL Throughput: 0.81 Mbps Achieved UL Throughput for each UE: [0.4 0.4] Peak UL spectral efficiency: 2.73 bits/s/Hz. Achieved UL spectral efficiency for cell: 0.04 bits/s/Hz Block error rate for each UE in the uplink direction: [0 0] Peak DL Throughput: 75.37 Mbps. Achieved Cell DL Throughput: 9.27 Mbps Achieved DL Throughput for each UE: [0 9.27] Peak DL spectral efficiency: 3.77 bits/s/Hz. Achieved DL spectral efficiency for cell: 0.46 bits/s/Hz Block error rate for each UE in the downlink direction: [1 0] ```
```Metrics for site 15, sector 2 : ```
```Peak UL Throughput: 54.64 Mbps. Achieved Cell UL Throughput: 0.00 Mbps Achieved UL Throughput for each UE: [0 0] Peak UL spectral efficiency: 2.73 bits/s/Hz. Achieved UL spectral efficiency for cell: 0.00 bits/s/Hz Block error rate for each UE in the uplink direction: [1 1] Peak DL Throughput: 75.37 Mbps. Achieved Cell DL Throughput: 10.04 Mbps Achieved DL Throughput for each UE: [0 10.04] Peak DL spectral efficiency: 3.77 bits/s/Hz. Achieved DL spectral efficiency for cell: 0.50 bits/s/Hz Block error rate for each UE in the downlink direction: [1 0] ```
### Simulation Logs
Save the simulation logs related to cells of interest into a MAT file. For a description of the simulation logs format, see the NR Cell Performance Evaluation with MIMO example.
```if enableTraces simulationLogs = cell(numCellsOfInterest,1); for cellIdx = 1:numCellsOfInterest % Get the gNB for the cell of interest from the scenario object gNB = getNodesForCell(scenario,siteOfInterest,sectorOfInterest,cellIdx); if gNB.DuplexMode == "FDD" logInfo = struct('DLTimeStepLogs',[],'ULTimeStepLogs',[],... 'SchedulingAssignmentLogs',[],'PhyReceptionLogs',[]); [logInfo.DLTimeStepLogs,logInfo.ULTimeStepLogs] = getSchedulingLogs(simSchedulingLogger{cellIdx}); else % TDD logInfo = struct('TimeStepLogs',[],'SchedulingAssignmentLogs',[],'PhyReceptionLogs',[]); logInfo.TimeStepLogs = getSchedulingLogs(simSchedulingLogger{cellIdx}); end % Get the scheduling assignments log logInfo.SchedulingAssignmentLogs = getGrantLogs(simSchedulingLogger{cellIdx}); % Get the phy reception logs logInfo.PhyReceptionLogs = getReceptionLogs(simPhyLogger{cellIdx}); simulationLogs{cellIdx} = logInfo; end % Save simulation logs in a MAT-file save(simulationLogFile,'simulationLogs'); end```
### Local Functions
```function [gNB,UEs] = getNodesForCell(scenario,siteOfInterest,sectorOfInterest,cellIdx) siteIdx = siteOfInterest(cellIdx) + 1; sectorIdx = sectorOfInterest(cellIdx) + 1; numCellSites = numel(scenario.CellSites); if (siteIdx > numCellSites) warning('The cell site of interest (%d) does not exist, using the last cell site (%d).',siteIdx-1,numCellSites-1); siteIdx = numCellSites; end numSectors = numel(scenario.CellSites(siteIdx).Sectors); if (sectorIdx > numSectors) warning('For cell site %d, the sector of interest (%d) does not exist, using the last sector (%d).',siteIdx-1,sectorIdx-1,numSectors-1); sectorIdx = numSectors; end gNB = scenario.CellSites(siteIdx).Sectors(sectorIdx).BS; UEs = scenario.CellSites(siteIdx).Sectors(sectorIdx).UEs; end```
## References
[1] 3GPP TS 38.104. “NR; Base Station (BS) radio transmission and reception.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network.
[2] 3GPP TS 38.214. “NR; Physical layer procedures for data.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network.
[3] 3GPP TS 38.321. “NR; Medium Access Control (MAC) protocol specification.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network.
[4] 3GPP TS 38.322. “NR; Radio Link Control (RLC) protocol specification.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network.
[5] 3GPP TS 38.323. “NR; Packet Data Convergence Protocol (PDCP) specification.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network.
[6] 3GPP TS 38.331. “NR; Radio Resource Control (RRC) protocol specification.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network.
[7] 3GPP TR 37.910. “Study on self evaluation towards IMT-2020 submission.” 3rd Generation Partnership Project; Technical Specification Group Radio Access Network. | {} |
# Question 67dbe
Oct 5, 2015
Empirical Formula is ${C}_{4} {H}_{5}$ and the molecular formula is ${C}_{8} {H}_{10}$.
#### Explanation:
$\text{Number of moles of organic compound} = \frac{0.2612}{106}$
$\implies n = 2.4642 \times 10^{-3} \text{ mol}$
$\text{Number of moles of } C {O}_{2} = \frac{0.8661}{44}$
$\implies n = 0.0197 \text{ mol}$
$\text{Number of moles of } {H}_{2} O = \frac{0.2250}{18}$
$\implies n = 0.0125 \text{ mol}$
The equation for the reaction is:
${C}_{a} {H}_{b} + \left(a + \frac{b}{4}\right) {O}_{2} \to a \cdot C {O}_{2} + \frac{b}{2} \cdot {H}_{2} O$
Mole ratio of $C {O}_{2} : \text{organic compound} = \frac{0.0197}{2.4642 \times 10^{-3}}$
$= 8 : 1$
This means $8 m o l$ of $C {O}_{2}$ is produced for burning $1 m o l$ of the organic compound. The value of $a$ is therefore 8.
Mole ratio of ${H}_{2} O : \text{organic compound} = \frac{0.0125}{2.4642 \times 10^{-3}}$
$\approx 5 : 1$
This shows $5 m o l$ of ${H}_{2} O$ is produced. So, $\frac{b}{2} = 5$, therefore $b$ is equal to 10.
Hence, the organic compound has molecular formula ${C}_{8} {H}_{10}$. Its empirical formula would hence be ${C}_{4} {H}_{5}$. A compound with this formula is xylene (dimethylbenzene).
N.B. This is not an orthodox method. If this question is asked in an exam, you may not be awarded any marks for using this method.
# 5. Calculate the energy necessary to excite an electron from the second shell to fifth shell in a hydrogen atom,
###### Question:
5. Calculate the energy necessary to excite an electron from the second shell to fifth shell in a hydrogen atom, in kJ/mol.
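A worked sketch using the Bohr-model energy levels $E_n = -\frac{1312 \text{ kJ/mol}}{n^2}$ (equivalently $-\frac{2.18 \times 10^{-18} \text{ J}}{n^2}$ per atom):

$\Delta E = E_5 - E_2 = 1312\left(\frac{1}{2^2} - \frac{1}{5^2}\right) \text{ kJ/mol} = 1312 \times 0.21 \text{ kJ/mol} \approx 276 \text{ kJ/mol}$

So roughly $2.8 \times 10^{2}$ kJ is needed to excite one mole of hydrogen atoms from the second shell to the fifth shell.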
# How to convince students of the integral identity $\int_0^af(x)dx=\int_0^af(a-x)dx$?
A common identity in integration is $\int_0^af(x)dx=\int_0^af(a-x)dx$.
The steps to prove it (algebraically, ignoring the geometric method) are as follows.
Let $u=a-x$ so $dx=-du$.
$\int_0^af(a-x)dx=-\int_a^0f(u)du=\int_0^af(u)du=\int_0^af(x)dx$
My students often aren't convinced as to how $\int_0^a f(u)du$ can just become $\int_0^a f(x)dx$ because of the relationship $u=a-x$.
My way of convincing them of this is to ask them to evaluate for me
$\int_0^a x^2 dx$
$\int_0^a y^2 dy$
$\int_0^a z^2 dz$
Sometimes this works but there have been times when they've said "Oh but that's because there's no relationship between $x$, $y$ and $z$!"
Have you ever had this problem? Do you perhaps have any suggestions in this scenario?
Also, I'm not too sure what tags to attach to it, some help here would be appreciated.
• How would your students feel about making the transformation $x = u$ in the second integral, and observing that $\mathrm dx = \mathrm du$? (This seems silly to us, but I've had students seriously propose it as a possible simplifying substitution.) If they don't like re-using $x$, then trying it with $y$, $z$, $w$ (which, as calculus teachers know, is the letter after $z$), and so on may help to convince them that the variable name truly doesn't matter. (EDIT: Or you could practice making that transformation first, so the 'dummy'ness of the variable is familiar before its appearance here.) – LSpice Mar 28 '15 at 23:05
• The notation describes the area of a figure in the plane. This geometrical quantity is independent of the choice of coordinates used to describe the plane. – Ben Crowell Mar 29 '15 at 22:14
• @BenCrowell, Trogdor mentions at matheducators.stackexchange.com/questions/7694/… (and should maybe include in the body of the post …) that the students specifically ask for an algebraic justification of this fact, not just a geometric one. – LSpice Mar 30 '15 at 22:50
## Problem of sloppy notation
The notation is sloppy. Your students are justifiably confused. We've just gotten used to it.
In order to untangle this, we need the notion of free variables and bound variables. These have somewhat confusing, perhaps even counter-intuitive names. So, I will use "local" as synonymous with "bound" and "non-local" as synonymous with "free".
When we write the expression $\int_0^a f(x) dx$, the $x$ is a local (bound) variable. It does not refer to anything external to the expression. The meaning of $x$ is "bound" within the $\int$-operation. Local (bound) variables may be thought of as conveniences or placeholders. Their names do not matter. So, we could also write $\int_0^a f(v) dv$ and mean the exact same thing.
In the expression $\int_0^a f(x) dx$, the $f$ and $a$ are non-local (free). They refer to objects that need some external context. Another way to say this is that the expression is a function of $f$ and $a$, that is we could write $g(f,a) = \int_0^a f(x) dx$. It is not a function of $x$. We could write this function instead as $g(f,a) = \int_0^a f(v) dv$ and mean the same thing.
Now consider the expression put before your students.
$$\int_0^af(a-x)dx=-\int_a^0f(u)du=\int_0^af(u)du=\int_0^af(x)dx \qquad (1)$$
In equation (1) you've asked your students to consider $u$ as a non-local variable, equal to $a-x$, in the exposition for the leftmost equality. On the right-hand side that context has already been dropped. In the rightmost equality, students are confused that $u$ is treated as a local variable, that is, not subject to the context of $u = a - x$.
Indeed, if we are allowed to let context come and go, then we could continue equation (1) as follows since $x = a - u$
$$\int_0^af(x)dx = -\int_0^af(a-u)du \qquad (2)$$
Combining (1) and (2) we have
$$\int_0^af(a-x)dx=-\int_0^af(a-u)du \qquad (3)$$
And dropping context once more to consider $u$ as local to the $\int$-expression only, we'd get
$$-\int_0^af(a-u)du=-\int_0^af(a-x)dx \qquad (4)$$
And putting our results together we have arrived at
$$\int_0^af(a-x)dx=-\int_0^af(a-x)dx \qquad (5)$$
Which of course implies that for any $f$ and $a$, $\int_0^af(a-x)dx=0$. Yikes! Your students are wise to be wary of applying and dropping context.
## Problem of missing the key transformation
Consider instead writing $u = \phi(x) = a - x$, in other words $u$ is a function. Then, develop purely with derivatives and algebraic substitution that
$$\int_0^af(a-x)dx=-\int_0^a f(\phi(x))\phi'(x) dx \qquad (6)$$
Now we often just write $u = \phi(x)$ and $du = u'(x) = \phi'(x)dx$, so we could also write (6) as
$$\int_0^af(a-x)dx=-\int_0^a f(u)du \qquad (7)$$
However, at this point we are yet to actually do integration by substitution. Going from (6) to (7) is just a change in notation! Continuing from (6), integration by substitution, which requires two applications of the fundamental theorem of calculus, allows the following
\begin{align} \int_0^af(a-x)dx &=-\int_0^a f(\phi(x))\phi'(x) dx \\ &=-\int_{\phi(0)}^{\phi(a)} f(v) dv \end{align}
where $v$ is a local variable! Writing this same development with $u$ notation instead, that is continuing equation (7), we'd have
\begin{align} \int_0^af(a-x)dx &=-\int_0^a f(u)du \\ &=-\int_{\phi(0)}^{\phi(a)} f(v)dv \end{align}
So, where it looks like we've simply swapped names for $u$ and $v$ and changed the limits, we actually applied the fundamental theorem of calculus twice! That is what justifies the apparent dropping of context, and it is no trivial result.
Now, since $x$ and $v$ are local variables (bound within their own $\int$-expressions), we can rename them as we like, such as naming $v$ as $x$ to get
\begin{align} \int_0^af(a-x)dx &=-\int_0^a f(u)du \\ &=-\int_{\phi(0)}^{\phi(a)} f(v)dv \qquad \text{by integration by substitution} \\ &=\int_0^a f(v) dv =\int_0^a f(x) dx \end{align}
• +1 simply for the concept that your students having trouble with something may say more about you/the maths than your students – DavidButlerUofA Mar 25 '15 at 18:49
• I agree with the importance of explaining the local and non-local variables (I too like those terms better, they fit well with computer science classes student can have in parallel); but I also think that we should teach student to write change of variables in the usual, however sloppy way. They swill meet it everywhere, in physics notably, and it is very convenient for computations. – Benoît Kloeckner Mar 25 '15 at 21:38
• The notation is confusing, but not sloppy. Besides, I can't agree with your description of $(1)$, if $u$ was a non-local variable, then you could not use $\int \bullet\ \mathrm{d}u$. The fact that $u$ appears next to $\mathrm{d}$ means that it is a local variable, and it is not equal to $a-x$, it just happens to range over the same set of values. – dtldarek Mar 31 '15 at 10:10
• @dtldarek You are right. I should attempt a reword. I meant that $u = a -x$ was defined in the exposition for the leftmost equality, but yes that context is dropped already on the right hand side. This is the step we've all gotten used to, but confused students might be helped by doing two steps -- a change of variables, then integration by substitution with a unique variable name. – A. Webb Mar 31 '15 at 12:12
This answer attacks not only this problem, but a lot of all others. At the expense of going against the grain, however.
A much deeper issue is this permanent grip on the concept of 'function of a variable' which I've described in the past as pedagogical cancer. There's no such thing as the function $f(x)$ (unless $f$ is a functional, but that's a different matter), the function is $f$. If one wants to talk about $f$ evaluated at generic value, one takes a variable name, say $x$ (or $u$) and one writes $f(x)$ (or $f(u)$) and it doesn't matter what variable one chooses. Then some properties are proven and $\forall x(\text{something})$ is concluded, which is equivalent to $\forall u(\text{something})$, the variables are bound.
To avoid the present problem, the one linked above and many others, rid the students of this inadequate concept.
The symbol $\small{\displaystyle \int\limits_a^b f(x)\,\mathrm dx}$ is just another way of writing $\small{\displaystyle \int\limits_a^b f}$ and this is true for $\small{\displaystyle \int\limits_a^b f(u)\,\mathrm du, \int\limits_a^b f(z)\,\mathrm dz, \int\limits_a^b f(\text{cancer})\,\mathrm d\text{cancer}}$, etc.
And if certain requirements are met, integration by substitution says that $$\displaystyle \int\limits_a^b f=\int \limits_{u^{-1}(a)}^{u^{-1}(b)}\left(f\circ u\right) u'.$$
Where's your $x$ now? In doing this it's not so much that the students answer the question themselves, it's more that the question will not even arise.
• This is mathematically unassailable, but I have my doubts as to whether it would make sense to the students Trogdor describes in the OP. – mweiss Mar 25 '15 at 13:53
• @Aeryk No. One of the main points of GitGud's answer is that one doesn't need the $d$'s and one is better off without them. – Andreas Blass Mar 25 '15 at 17:43
• @mweiss I share the same concern. If they are high school students, I might abandon my idea. But beyond high school, I think it is a terrible idea to learn/teach maths without first introducing the students to informal logic, proof strategies, etc (How to Prove It, basically), including the concept of bound variable. Having this concept is enough to make sense of what I suggest, it's not necessary to go through all basic informal logic. – Git Gud Mar 25 '15 at 22:58
• I think you could just say that the notation $\int_a^b f$ is defined to be the same thing as $\int_a^b f(x) dx$ is defined to be without need to reference the technical details of either definition. In other words, we mean the same thing by $\int_a^b f$ because that's what we've agreed for it to mean, a notational convenience. – A. Webb Mar 26 '15 at 17:16
• @GitGud I really thought is was $\int_{a}^{b}f = \int_{u^{-1}(a)}^{u^{-1}(b)} (f\circ u)(u^{\prime})$ because, for a primitive $F$ of $f,$ the left-hand-side gives $F(b)-F(a)$ whereas the right-hand-side gives $F(u(u^{-1}(b))) - F(u(u^{-1}(a))).$ Am I not thinking straight? I find I get muddle with the substitution formula when proving/thinking about it formally. – Shai Apr 1 '15 at 21:04
I would ask students to consider what the graphs of $f(x)$ and $f(a-x)$ look like (and how they are related to each other) on the interval $[0,a]$.
Draw a sketch of some arbitrary-looking function on $[0,a]$ and label it $f(x)$. Now we want to figure out what $g(x)=f(a-x)$ looks like. By direct computation, $g(0)=f(a)$ and $g(a)=f(0)$. Once you have those, it is not hard to realize that the graph of $g(x)$ is just a mirror-reversal (left-to-right) of the graph of $f(x)$. Draw that sketch and label it $f(a-x)$. (At this point it probably wouldn't hurt to remind students of their prior experience with function transformations.)
Once they see that the graph of $f(a-x)$ on $[0,a]$ is simply a mirror-reversal of the graph of $f(x)$ on the same interval, it should be obvious why the areas under the two curves are equal.
• This is the way I teach it to them if I were to take a geometric approach. But many have asked me for a purely algebraic approach, in which I have the above problem. – Trogdor Mar 25 '15 at 2:55
• So is the issue purely the replacement of one dummy variable with another? – mweiss Mar 25 '15 at 2:58
• That is exactly the problem. The students keep thinking that because the two dummy variables have a relationship, then we cannot just turn one to the other like that. – Trogdor Mar 25 '15 at 2:59
• Okay, I think I have a clearer sense now of what you are asking. I think the issue has to do with a lack of understanding of just what "dummy variables" mean in a definite integral. I will give some thought to this and try to come up with another answer. – mweiss Mar 25 '15 at 3:02
• @Trogdor Symmetry is not constrained to geometry, it is a nice, intuitive explanation, but we are not obliged to use it. Even with an algebraic approach the symmetry works just as well. If you don't want to interpret the integral as the "area under a curve", then you can use just the definition, reflect all the points/sets/intervals or whatever you are using and get exact equality (i.e. for each element of converging sequence there is a reflected one with exactly same value). – dtldarek Mar 31 '15 at 9:59
Consider $\int f(a-x) dx$. Let $F(x)$ be an antiderivative of $f(x)$. Write $u=a-x$, which means $du = -dx$ and $-du = dx$. Then \begin{align} \int f(a-x) dx &= \int -f(u)du \\ &= -F(u)+C \\ &= -F(a-x) + C. \end{align} Thus $-F(a-x)$ is an antiderivative of $f(a-x)$.
So \begin{align} \int_0^a f(a-x)dx &= \big[-F(a-x)\big]_0^a \\ &= -F(a-a) - \big(-F(a-0)\big) \\ &= F(a)-F(0). \end{align} Also \begin{align} \int_0^a f(x)dx &= \big[F(x)\big]_0^a \\ &= F(a) - F(0). \end{align} So the two definite integrals are the same.
I think the point you're trying to make is that it doesn't matter what the variable is, even if you've mentioned it already, because it will eventually be replaced with proper numbers later. And therein lies the issue, I think: many students don't behave as if they realise that the definite integral is a number. To say that $\int_a^b f(x) dx = \int_a^b f(u)du$ is to say that these two numbers are equal. Actually performing the calculation in some way to have the numbers themselves may go some way to convincing them of that.
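For students who want to see the two numbers rather than the symmetry argument, a quick numerical check makes the point; this is only a sketch in Python, and the particular $f$ and $a$ are arbitrary choices:

```python
import numpy as np

a = 2.0
f = lambda x: np.exp(-x**2) + np.sin(3 * x)   # any integrand will do

n = 100_000
x = (np.arange(n) + 0.5) * (a / n)            # midpoints of n equal subintervals

lhs = np.mean(f(x)) * a                       # approximates the integral of f(x)   over [0, a]
rhs = np.mean(f(a - x)) * a                   # approximates the integral of f(a-x) over [0, a]

print(lhs, rhs)                               # the two numbers agree to quadrature accuracy
```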
In $\int_0^af(x)\,dx$, the inputs to $f$ begin at $0$ and progress to $a$.
In $\int_0^af(a-x)\,dx$, the inputs to $f$ begin at $a$ and progress to $0$.
So when considering Riemann sums with uniform-width bases and using midpoints for test points, the two collections of heights of the rectangles in the expressions are the same.
• Just leave Riemann out of this. Every time a student hears the name of a mathematician he stops thinking in anticipation of complexities. Simply tell them that the road from $0$ to $a$ is the same as from $a$ to $0$, independent of where you start, provided the $f$ is the same. – Thinkeye Mar 26 '15 at 16:30
• @Thinkeye You can leave out the name, but I'd never leave out the idea that an integral is a sum of rectangle bases ($dx$) with rectangle heights ($f(x)$). And here, nothing changes with the bases, and the only "change" with the heights is reordering. – alex.jordan Mar 26 '15 at 17:41
• You are absolutely right, it's the idea that counts. I only wanted to say that you are better off first explaining the idea and introducing the name of the author to students later. After they have absorbed the concept, the name doesn't scare them anymore. – Thinkeye Mar 27 '15 at 11:31
Landau-Kolmogorov inequality
In mathematics, the Landau-Kolmogorov inequality is an inequality between different derivatives of a function. There are many inequalities carrying this name (sometimes they are also called Kolmogorov-type inequalities); the common form is
$\|f^{(k)}\|_{L_q(T)} \le K \cdot \|f\|^{\alpha}_{L_p(T)} \cdot \|f^{(n)}\|^{1-\alpha}_{L_r(T)}$, where $1 \le k < n$.
Here all three norms can be different from each other (from $L_1$ to $L_\infty$), giving different inequalities. These inequalities can also be written for function spaces on the axis, the semiaxis or a closed segment (denoted by T), which again gives a range of different inequalities. These inequalities are still intensively studied. The most notable results are those where the exact value of the minimal constant $K = K_T(n, k, q, p, r)$ is found.
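For instance, a standard special case (not spelled out in the entry itself, stated here for illustration) is Landau's inequality on the half-line, corresponding to $n = 2$, $k = 1$, $p = q = r = \infty$ and $\alpha = 1/2$:
$\|f'\|_{L_\infty(\mathbb{R}_+)} \le 2\, \|f\|^{1/2}_{L_\infty(\mathbb{R}_+)}\, \|f''\|^{1/2}_{L_\infty(\mathbb{R}_+)}.$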
History
An inequality of this type was first established by Hardy, Littlewood and Pólya. Some exact constants (for example $K_{\mathbb{R}_+}(2, 1, \infty, \infty, \infty) = 2$) were found by Landau in
E. Landau, “Ungleichungen für zweimal differenzierbare Funktionen”, Proc. London Math. Soc., 13 (1913), 43–49.
Kolmogorov obtained one of the most outstanding results in this field: with all three norms equal to $L_\infty$, he found $K_{\mathbb{R}}(n, k, \infty, \infty, \infty)$.
# zbMATH — the first resource for mathematics
## Wu, Jianzhuan
Author ID: wu.jianzhuan Published as: Wu, J.; Wu, J. Z.; Wu, J.-Z.; Wu, Jianzhuan
Documents Indexed: 82 Publications since 1986, including 1 Book
#### Co-Authors
0 single-authored 8 Lin, Wensong 2 Lam, Peter Che Bor 2 Song, Zengmin 1 Gu, Guohua 1 Xu, Kexiang 1 Yin, Xiang
#### Serials
4 Journal of Southeast University. English Edition 2 Journal of Combinatorial Optimization 1 Discrete Applied Mathematics 1 Discrete Mathematics 1 Ars Combinatoria 1 Journal of Nanjing University. Mathematical Biquarterly 1 Taiwanese Journal of Mathematics
#### Fields
10 Combinatorics (05-XX) 1 Operations research, mathematical programming (90-XX)
#### Citations contained in zbMATH Open
53 Publications have been cited 467 times in 443 Documents
A new approach to the development of automatic quadrilateral mesh generation. Zbl 0755.65118
Zhu, J. Z.; Zienkiewicz, O. C.; Hinton, E.; Wu, J.
1991
Incompressibility without tears – how to avoid restrictions of mixed formulation. Zbl 0756.76056
Zienkiewicz, O. C.; Wu, J.
1991
Error estimation and adaptivity in Navier-Stokes incompressible flows. Zbl 0699.76035
Wu, J.; Zhu, J. Z.; Szmelter, J.; Zienkiewicz, O. C.
1990
Transition in wall-bounded flows. Zbl 1146.76601
Lee, C. B.; Wu, J. Z.
2008
Integral force acting on a body due to local flow structures. Zbl 1110.76016
Wu, J.-Z.; Lu, X.-Y.; Zhuang, L.-X.
2007
Superconvergent patch recovery techniques — some further tests. Zbl 0778.73079
Zienkiewicz, O. C.; Zhu, J. Z.; Wu, J.
1993
Particulate flow simulation via a boundary condition-enforced immersed boundary-lattice Boltzmann scheme. Zbl 1364.76193
Wu, J.; Shu, C.
2010
Automatic directional refinement in adaptive analysis of compressible flows. Zbl 0810.76045
Zienkiewicz, O. C.; Wu, J.
1994
Boundary condition-enforced immersed boundary method for thermal flow problems with Dirichlet temperature condition and its applications. Zbl 1365.76199
Ren, W. W.; Shu, C.; Wu, J.; Yang, W. M.
2012
Several parameters of generalized Mycielskians. Zbl 1093.05050
Lin, Wensong; Wu, Jianzhuan; Lam, Peter Che Bor; Gu, Guohua
2006
Effective vorticity-velocity formulations for three-dimensional incompressible viscous flows. Zbl 0835.76018
Wu, X. H.; Wu, J. Z.; Wu, J. M.
1995
A convergence theorem for the fuzzy subspace clustering (FSC) algorithm. Zbl 1134.68488
Gan, G.; Wu, J.
2008
The strong chromatic index of a class of graphs. Zbl 1214.05033
Wu, Jianzhuan; Lin, Wensong
2008
$$L (j, k)$$- and circular $$L(j, k)$$-labellings for the products of complete graphs. Zbl 1132.05053
Lam, Peter Che Bor; Lin, Wensong; Wu, Jianzhuan
2007
Turbulent drag reduction by traveling wave of flexible wall. Zbl 1060.76563
Zhao, H.; Wu, J.-Z.; Luo, J.-S.
2004
A simple distribution function-based gas-kinetic scheme for simulation of viscous incompressible and compressible flows. Zbl 1351.76257
Yang, L. M.; Shu, C.; Wu, J.
2014
Steady-state response of the parametrically excited axially moving string constituted by the Boltzmann superposition principle. Zbl 1064.74086
Chen, L.-Q.; Zu, J. W.; Wu, J.
2003
A general explicit or semi-explicit algorithm for compressible and incompressible flows. Zbl 0764.76040
Zienkiewicz, O. C.; Wu, J.
1992
A boundary condition-enforced immersed boundary method for compressible viscous flows. Zbl 1390.76504
Qiu, Y. L.; Shu, C.; Wu, J.; Sun, Y.; Yang, L. M.; Guo, T. Q.
2016
Axial stretching and vortex definition. Zbl 1187.76570
Wu, J.-Z.; Xiong, A.-K.; Yang, Y.-T.
2005
Distance two edge labelings of lattices. Zbl 1273.90229
Lin, Wensong; Wu, Jianzhuan
2013
Boundary vorticity dynamics since Lighthill’s 1963 article: Review and development. Zbl 0913.76030
Wu, J. Z.; Wu, J. M.
1998
Basis-spline collocation method for the lattice solution of boundary value problems. Zbl 0724.65105
Umar, A. S.; Wu, J.; Strayer, M. R.; Bottcher, C.
1991
A three-dimensional explicit sphere function-based gas-kinetic flux solver for simulation of inviscid compressible flows. Zbl 1349.76751
Yang, L. M.; Shu, C.; Wu, J.
2015
The average exponent of elliptic curves modulo $$p$$. Zbl 1384.11072
Wu, J.
2014
On the sum of exponential divisors of an integer. Zbl 0902.11037
Pétermann, Y.-F. S.; Wu, J.
1997
Helical-wave decomposition and applications to channel turbulence with streamwise rotation. Zbl 1205.76153
Yang, Y.-T.; Su, W.-D.; Wu, J.-Z.
2010
Circular chromatic numbers and fractional chromatic numbers of distance graphs with distance sets missing an interval. Zbl 1093.05026
Wu, Jianzhuan; Lin, Wensong
2004
Review of the physics of enhancing vortex lift by unsteady excitation. Zbl 0850.76112
Wu, J. Z.; Vakili, A. D.; Wu, J. M.
1991
A stabilized complementarity formulation for nonlinear analysis of 3D bimodular materials. Zbl 1348.74030
Zhang, L.; Zhang, H. W.; Wu, J.; Yan, B.
2016
On a general theory for compressing process and aeroacoustics: linear analysis. Zbl 1269.76108
Mao, F.; Shi, Y. P.; Wu, J. Z.
2010
The physical origin of severe low-frequency pressure fluctuations in giant Francis turbines. Zbl 1137.76392
Zhang, R.-K.; Cai, Q.-D.; Wu, J.-Z.; Wu, Y.-L.; Liu, S.-H.; Zhang, L.
2005
Identification of sites for road accident remedial work by Bayesian statistical methods: An example of uncertain inference. Zbl 0984.90502
Heydecker, B. G.; Wu, J.
2001
Lift and drag in two-dimensional steady viscous and compressible flow. Zbl 1382.76149
Liu, L. Q.; Zhu, J. Y.; Wu, J. Z.
2015
On circular-$$L$$(2,1)-edge-labeling of graphs. Zbl 1259.05147
Lin, Wensong; Wu, Jianzhuan
2012
Steady vortex force theory and slender-wing flow diagnosis. Zbl 1202.76036
Yang, Y. T.; Zhang, R. K.; An, Y. R.; Wu, J. Z.
2007
Fluid kinematics on a deformable surface. Zbl 1082.76034
Wu, J.-Z.; Yang, Y.-T.; Luo, Y.-B.; Pozrikidis, C.
2005
Second order nonlinear spatial stability analysis of compressible mixing layers. Zbl 1185.76216
2002
A vorticity dynamics theory of three-dimensional flow separation. Zbl 1184.76595
Wu, J. Z.; Tramel, R. W.; Zhu, F. L.; Yin, X. Y.
2000
Dynamics of dual-particles settling under gravity. Zbl 1121.76482
Wu, J.; Manasseh, R.
1998
Modeling dynamic contraction of muscle using the cross-bridge theory. Zbl 0880.92005
Wu, J. Z.; Herzog, W.; Cole, G. K.
1997
Vorticity dynamics on boundaries. Zbl 0870.76068
Wu, J. Z.; Wu, J. M.
1996
A new elasto-plastic constitutive model inserted into the user-supplied material model of ADINA. Zbl 1068.74608
Ellyin, F.; Xia, Z.; Wu, J.
1995
Streaming vorticity flux from oscillating walls with finite amplitude. Zbl 0781.76030
Wu, J. Z.; Wu, X. H.; Wu, J. M.
1993
On a sum involving the Euler totient function. Zbl 1446.11178
Wu, J.
2019
Longitudinal-transverse aerodynamic force in viscous compressible complex flow. Zbl 1327.76085
Liu, L. Q.; Shi, Y. P.; Zhu, J. Y.; Su, W. D.; Zou, S. F.; Wu, J. Z.
2014
A new implementation of the numerical manifold method (NMM) for the modeling of non-collinear and intersecting cracks. Zbl 1356.74248
Cai, Y. C.; Wu, J.; Atluri, S. N.
2013
Stability and vibrations of an all-terrain vehicle subjected to nonlinear structural deformation and resistance. Zbl 1111.34037
Dai, L.; Wu, J.
2007
A numerical method of moments for solute transport in physically and chemically nonstationary formations: linear equilibrium sorption with random $$K_d$$. Zbl 1145.86302
Hu, B. X.; Wu, J.; Zhang, D.
2004
Crack-tip field of a supersonic bimaterial interface crack. Zbl 1110.74759
Wu, J.
2002
On some asymptotic formulae of Ramanujan. Zbl 0999.11059
Pétermann, Y.-F. S.; Wu, J.
2002
A hybrid method for simulation of axial flow impeller driven mixing vessels. Zbl 1093.76545
Blackburn, H. M.; Elston, J. R.; Niclasen, D. A.; Rudman, M.; Wu, J.
2000
Conical turbulent swirling vortex with variable eddy viscosity. Zbl 0616.76076
Wu, J. Z.
1986
#### Cited by 952 Authors
12 Lin, Wensong 10 Zienkiewicz, Olgierd Cecil 9 Shu, Chang 8 Lu, Xiyun 8 Yang, Liming 7 Pastor, Manuel 7 Wang, Yan 6 Chen, Liqun 6 Chen, Shiyi 6 Oñate Ibáñez de Navarra, Eugenio 5 Cai, Qingdong 5 Deng, Zhaohong 5 Goddard, Anthony J. H. 5 Kim, Byeong Moon 5 Lee, Cunbiao 5 Pain, Christopher C. 5 Schröder, Wolfgang Armin 5 Smolarkiewicz, Piotr K. 5 Song, Byung Chul 5 Wang, Shitong 4 Chung, Fu-Lai 4 de Sampaio, Paulo A. B. 4 Graham, Michael D. 4 Meysonnat, Pascal S. 4 Nithiarasu, Perumal 4 Rebholz, Leo G. 4 Wang, Jun 4 Wu, Jie 4 Wu, Jiezhi 4 Yu, Gexin 3 Alauzet, Frédéric 3 Apel, Thomas 3 Batra, Romesh C. 3 Chen, Liwei 3 Choi, Kup-Sze 3 Cruchaga, Marcela A. 3 de Oliveira, Cassiano R. E. 3 Ding, Hang 3 Dolejší, Vít 3 Guo, Tongqing 3 Hauke, Guillermo 3 Huerta, Antonio 3 Linden, Paul F. 3 Liu, Daphne Der-Fen 3 Liu, Haoran 3 Lu, Zhiliang 3 Naduvath, Sudev 3 Olshanskii, Maxim A. 3 Pan, Liang 3 Quadrio, Maurizio 3 Rho, Yoomi 3 Shiu, Waichee 3 Shu, Shi 3 Smith, C. Ray 3 Sun, Yu 3 Szmelter, Joanna 3 Tian, Fangbao 3 Umpleby, A. P. 3 Wu, Qiong 3 Xu, Kun 3 Yuan, Haizhuan 3 Zhang, Wei 3 Zhang, Yongjie 3 Zhou, Xiangqian 2 Aguilar, Juan C. 2 Albelda, José 2 Areias, Pedro M. A. 2 Bialecki, Bernard 2 Bugeda, Gabriel 2 Chang, Gerard Jennhwa 2 Chen, Lily 2 Chithra, Kaithavalappil 2 Choi, Hyoung Gwon 2 Choi, Jae-Boong 2 Codina, Ramon 2 Cojocaru, Alina Carmen 2 Coutinho, Alvaro L. G. A. 2 Dai, Benqiu 2 Darbandi, Masoud 2 Dash, Sunil Manohar 2 De, Ashoke 2 Delouei, A. Amiri 2 Doweidar, Mohamed Hamdy 2 Eyink, Gregory L. 2 Fairweather, Graeme 2 Fang, Fangxin 2 Feldman, Yuri 2 Frey, Pascal Jean 2 Fuenmayor, Francisco Javier 2 Fuster, Daniel 2 Germina, K. A. 2 Goodman, Jonathan B. 2 Gorman, Gerard J. 2 Guo, Zhaoli 2 Gupta, Akshat 2 He, Dan 2 Huang, Mingfang 2 Hwang, Woonjae 2 Jayathilake, P. G. 2 Jiang, Yizhang ...and 852 more Authors
#### Cited in 108 Serials
49 Journal of Fluid Mechanics 46 Computer Methods in Applied Mechanics and Engineering 43 Journal of Computational Physics 30 Computers and Fluids 22 International Journal for Numerical Methods in Engineering 18 Communications in Numerical Methods in Engineering 12 Physics of Fluids 11 International Journal for Numerical Methods in Fluids 11 Journal of Combinatorial Optimization 9 Computational Mechanics 9 Engineering Computations 8 Computers & Mathematics with Applications 7 Discrete Mathematics 7 Applied Mathematics and Computation 6 European Journal of Mechanics. B. Fluids 5 Applied Numerical Mathematics 5 Applied Mathematical Modelling 5 Pattern Recognition 5 AMM. Applied Mathematics and Mechanics. (English Edition) 4 Journal of Number Theory 4 Mathematical Problems in Engineering 4 Acta Mechanica Sinica 3 Information Processing Letters 3 Chaos, Solitons and Fractals 3 Information Sciences 3 Journal of Computational and Applied Mathematics 3 European Journal of Combinatorics 3 Applied Mathematics and Mechanics. (English Edition) 3 International Journal of Computer Mathematics 3 Discussiones Mathematicae. Graph Theory 3 International Journal of Computational Fluid Dynamics 3 Discrete Mathematics, Algorithms and Applications 2 Modern Physics Letters B 2 Discrete Applied Mathematics 2 Journal of Engineering Mathematics 2 Mathematical Biosciences 2 Mathematics and Computers in Simulation 2 Computer Aided Geometric Design 2 Mathematical and Computer Modelling 2 International Journal of Foundations of Computer Science 2 International Journal of Numerical Methods for Heat & Fluid Flow 2 The Electronic Journal of Combinatorics 2 Engineering Analysis with Boundary Elements 2 European Journal of Mechanics. A. Solids 2 Journal of Theoretical Biology 1 International Journal of Modern Physics B 1 Acta Mechanica 1 Fluid Dynamics 1 International Journal of Engineering Science 1 Israel Journal of Mathematics 1 Journal of Applied Mathematics and Mechanics 1 Journal of the Mechanics and Physics of Solids 1 Physica A 1 Ukrainian Mathematical Journal 1 Physics of Fluids, A 1 Mathematics of Computation 1 Theoretical and Computational Fluid Dynamics 1 Acta Arithmetica 1 Functiones et Approximatio. Commentarii Mathematici 1 Fuzzy Sets and Systems 1 Journal of Differential Equations 1 Meccanica 1 Tokyo Journal of Mathematics 1 Transactions of the American Mathematical Society 1 Graphs and Combinatorics 1 Journal of Computer Science and Technology 1 International Journal of Computational Geometry & Applications 1 Applications of Mathematics 1 Numerical Algorithms 1 Aequationes Mathematicae 1 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 1 Chinese Science Bulletin 1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 1 SIAM Journal on Scientific Computing 1 Applied Mathematics. Series B (English Edition) 1 Journal de Théorie des Nombres de Bordeaux 1 Finite Fields and their Applications 1 Applied Mathematical Finance 1 Discrete and Continuous Dynamical Systems 1 Nonlinear Dynamics 1 Abstract and Applied Analysis 1 Soft Computing 1 PAA. Pattern Analysis and Applications 1 Proceedings of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 1 Journal of Mathematical Fluid Mechanics 1 Flow, Turbulence and Combustion 1 M2AN. Mathematical Modelling and Numerical Analysis. 
ESAIM, European Series in Applied and Industrial Mathematics 1 Communications in Nonlinear Science and Numerical Simulation 1 Computational Geosciences 1 International Journal of Nonlinear Sciences and Numerical Simulation 1 Journal of Turbulence 1 Archives of Computational Methods in Engineering 1 Statistical Modelling 1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 1 Sādhanā 1 Journal of Intelligent and Fuzzy Systems 1 Algebra and Discrete Mathematics 1 South East Asian Journal of Mathematics and Mathematical Sciences 1 AKCE International Journal of Graphs and Combinatorics 1 Networks and Spatial Economics ...and 8 more Serials
#### Cited in 26 Fields
239 Fluid mechanics (76-XX) 113 Numerical analysis (65-XX) 83 Mechanics of deformable solids (74-XX) 48 Combinatorics (05-XX) 20 Computer science (68-XX) 19 Partial differential equations (35-XX) 14 Number theory (11-XX) 12 Biology and other natural sciences (92-XX) 10 Statistics (62-XX) 9 Classical thermodynamics, heat transfer (80-XX) 8 Geophysics (86-XX) 8 Operations research, mathematical programming (90-XX) 6 Mechanics of particles and systems (70-XX) 4 Convex and discrete geometry (52-XX) 3 Dynamical systems and ergodic theory (37-XX) 3 Approximations and expansions (41-XX) 3 Statistical mechanics, structure of matter (82-XX) 2 Calculus of variations and optimal control; optimization (49-XX) 2 Differential geometry (53-XX) 2 Optics, electromagnetic theory (78-XX) 2 Quantum theory (81-XX) 2 Information and communication theory, circuits (94-XX) 1 Algebraic geometry (14-XX) 1 Integral equations (45-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX) | {} |
# Calculating torque required to rotate a platform
I am trying to spec out a motor's torque required to rotate a platform about an axis.
This diagram makes it easier to understand (it's like a rotisserie):
Dimensions are in cm:
I have an arbitrary shaped object on the platform that I simplified to a cuboid to calculate the moment of inertia.
The combined weight is about 25kg.
I use the moment of inertia formula for a cuboid along the length axis and use the parallel axis theorem to move the axis to where I have the rod, I get:
I = (1/2) * m * (w^2 + h^2) + m*((0.5*h)^2)
= (1/2) * 25kg * (0.4^2+ 0.5^2) + 25 * (0.2^2)
= 6.125 kg-m^2
Assuming I want to reach 5 rpm in 5 seconds, I have
5rpm = (2 * pi * 5)/60 rad/s
alpha = ((2 * pi * 5)/60 - 0)/(5-0) rad/s^2
T = I * alpha = 6.125 * (pi/30) = 0.64N-m
Now I am not entirely sure if this calculation is correct. I had a 5N-m rated dc motor lying around and I fit it to the platform. The motor was able to rotate the platform about 45 degree clockwise but was not able to come back to zero degrees. Am I missing something in the above calculation? Gravity doesn't feature in my equations.
There could be other factors like friction, or the gearbox in the motor?
In addition to the torque required to turn, you do also need to account for the torque required to hold the load against gravity, which could be significant.
Consider your system in terms of the rod being your arm, held out in front of you, and the box being a yard brush held at the end of the handle. Starting with the brush pointing up, it requires little effort to hold in position. Similarly you can swing it through 360 degrees with only a little more effort, relying on the falling momentum to swing it back around to the top position. Now try to hold it pointing directly sideways though. You would need to have pretty strong wrists to do that, if you can manage it at all.
Since your load is an arbitrary shape within a bounding box, I think you are better off calculating your weight distribution according to the worst case. For your system, this would be with your full 25kg load being at one of the top corners of the box.
When the rod is rotated such that the load is at its furthest extent, it is $$\sqrt{40^2+(50/2)^2}\approx47.17$$cm from the rod.
Calculate this torque and add it to the torque required to turn at the speed you want and you should have a motor sized for your application.
If you size your motor for worst case load, then it should be more than adequate for any load you throw at it. If you size it for anything less, then you may find that some loads exceed the capabilities of your system, so you may need to add additional sensors or control complexity to degrade gracefully in an overload situation.
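Putting the two contributions together gives a rough sizing sketch (Python; the numbers are those from the question and the worst-case lever arm above, and friction, gearbox losses and any safety margin are deliberately left out):

```python
import math

m = 25.0                               # load mass in kg (from the question)
g = 9.81                               # gravitational acceleration in m/s^2
r_worst = math.hypot(0.40, 0.25)       # worst-case lever arm in m (~0.4717)
I = 6.125                              # moment of inertia from the question, kg*m^2

omega_target = 5.0 * 2.0 * math.pi / 60.0   # 5 rpm expressed in rad/s
alpha = omega_target / 5.0                  # reach that speed in 5 s

tau_hold  = m * g * r_worst            # static torque to hold the worst-case load
tau_accel = I * alpha                  # dynamic torque to accelerate the platform

print(f"holding torque    ~ {tau_hold:.1f} N*m")    # ~115.7 N*m
print(f"accel torque      ~ {tau_accel:.2f} N*m")   # ~0.64 N*m
print(f"total (no margin) ~ {tau_hold + tau_accel:.1f} N*m")
```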
• Yeah that makes sense! So we get the largest possible moment arm and assume all the weight to be concentrated there to get the max torque that would ever be required. So 25 * g * 0.4717 = 115 N-m . Now that is a huge number! – rookie Oct 19 '18 at 16:43
• What does seem interesting is that the largest moment arm would feature somewhere near 90 degrees (roughly). Wouldn't the holding torque need to be high when say the platform has moved closer to 180 degree (or a little less)? – rookie Oct 19 '18 at 16:49
• Assuming 0 degrees is top dead centre, 180 degrees would require no holding torque, as the weight would be held by the rod bearings rather than by the motor. See my explanation about why SCARA arms can be better than Articulate arms in some circumstances in my answer to Which type of actuator will be suitable for a very strong robot arm. – Mark Booth Oct 22 '18 at 8:24
First of all, please draw the axes on the object above to make it easier for others to help you.
If you are going to rotate the cuboid from the end of the rod, you should also use the parallel axis theorem to translate the moment of inertia by the length of the rod as well.
From my understanding, you are trying to lift the cuboid by attaching a motor at the end of the rod, am I right? This is where the clearly labeled axes are important.
• Okay I changed my diagram, may be makes more sense now? – rookie Oct 19 '18 at 0:52 | {} |
# System size expansion
The system size expansion, also known as van Kampen's expansion or the Ω-expansion, is a technique pioneered by Nico van Kampen[1] used in the analysis of stochastic processes. Specifically, it allows one to find an approximation to the solution of a master equation with nonlinear transition rates. The leading order term of the expansion is given by the linear noise approximation, in which the master equation is approximated by a Fokker–Planck equation with linear coefficients determined by the transition rates and stoichiometry of the system.
Less formally, it is normally straightforward to write down a mathematical description of a system where processes happen randomly (for example, radioactive atoms randomly decay in a physical system, or genes that are expressed stochastically in a cell). However, these mathematical descriptions are often too difficult to solve for the study of the systems statistics (for example, the mean and variance of the number of atoms or proteins as a function of time). The system size expansion allows one to obtain an approximate statistical description that can be solved much more easily than the master equation.
## Preliminaries
Systems that admit a treatment with the system size expansion may be described by a probability distribution ${\displaystyle P(X,t)}$, giving the probability of observing the system in state ${\displaystyle X}$ at time ${\displaystyle t}$. ${\displaystyle X}$ may be, for example, a vector with elements corresponding to the number of molecules of different chemical species in a system. In a system of size ${\displaystyle \Omega }$ (intuitively interpreted as the volume), we will adopt the following nomenclature: ${\displaystyle \mathbf {X} }$ is a vector of macroscopic copy numbers, ${\displaystyle \mathbf {x} =\mathbf {X} /\Omega }$ is a vector of concentrations, and ${\displaystyle \mathbf {\phi } }$ is a vector of deterministic concentrations, as they would appear according to the rate equation in an infinite system. ${\displaystyle \mathbf {x} }$ and ${\displaystyle \mathbf {X} }$ are thus quantities subject to stochastic effects.
A master equation describes the time evolution of this probability.[1] Henceforth, a system of chemical reactions[2] will be discussed to provide a concrete example, although the nomenclature of "species" and "reactions" is generalisable. A system involving ${\displaystyle N}$ species and ${\displaystyle R}$ reactions can be described with the master equation:
${\displaystyle {\frac {dP(\mathbf {X} ,t)}{dt}}=\Omega \sum _{j=1}^{R}\left(\prod _{i=1}^{N}\mathbb {E} ^{-S_{ij}}-1\right)f_{j}(\mathbf {x} ,\Omega )P(\mathbf {X} ,t).}$
Here, ${\displaystyle \Omega }$ is the system size, ${\displaystyle \mathbb {E} }$ is an operator which will be addressed later, ${\displaystyle S}$ is the stoichiometric matrix for the system (in which element ${\displaystyle S_{ij}}$ gives the stoichiometric coefficient for species ${\displaystyle i}$ in reaction ${\displaystyle j}$), and ${\displaystyle f_{j}}$ is the rate of reaction ${\displaystyle j}$ given a state ${\displaystyle \mathbf {x} }$ and system size ${\displaystyle \Omega }$.
${\displaystyle \mathbb {E} ^{-S_{ij}}}$ is a step operator,[1] removing ${\displaystyle S_{ij}}$ from the ${\displaystyle i}$th element of its argument. For example, ${\displaystyle \mathbb {E} ^{-S_{23}}f(x_{1},x_{2},x_{3})=f(x_{1},x_{2}-S_{23},x_{3})}$. This formalism will be useful later.
The above equation can be interpreted as follows. The initial sum on the RHS is over all reactions. For each reaction ${\displaystyle j}$, the brackets immediately following the sum give two terms. The term with the simple coefficient −1 gives the probability flux away from a given state ${\displaystyle \mathbf {X} }$ due to reaction ${\displaystyle j}$ changing the state. The term preceded by the product of step operators gives the probability flux due to reaction ${\displaystyle j}$ changing a different state ${\displaystyle \mathbf {X'} }$ into state ${\displaystyle \mathbf {X} }$. The product of step operators constructs this state ${\displaystyle \mathbf {X'} }$.
### Example
For example, consider the (linear) chemical system involving two chemical species ${\displaystyle X_{1}}$ and ${\displaystyle X_{2}}$ and the reaction ${\displaystyle X_{1}\rightarrow X_{2}}$. In this system, ${\displaystyle N=2}$ (species), ${\displaystyle R=1}$ (reactions). A state of the system is a vector ${\displaystyle \mathbf {X} =\{n_{1},n_{2}\}}$, where ${\displaystyle n_{1},n_{2}}$ are the number of molecules of ${\displaystyle X_{1}}$ and ${\displaystyle X_{2}}$ respectively. Let ${\displaystyle f_{1}(\mathbf {x} ,\Omega )={\frac {n_{1}}{\Omega }}=x_{1}}$, so that the rate of reaction 1 (the only reaction) depends on the concentration of ${\displaystyle X_{1}}$. The stoichiometry matrix is ${\displaystyle (-1,1)^{T}}$.
{\displaystyle {\begin{aligned}{\frac {dP(\mathbf {X} ,t)}{dt}}&=\Omega \left(\mathbb {E} ^{-S_{11}}\mathbb {E} ^{-S_{21}}-1\right)f_{1}\left({\frac {\mathbf {X} }{\Omega }}\right)P(\mathbf {X} ,t)\\&=\Omega \left(f_{1}\left({\frac {\mathbf {X} +\mathbf {\Delta X} }{\Omega }}\right)P\left(\mathbf {X} +\mathbf {\Delta X} ,t\right)-f_{1}\left({\frac {\mathbf {X} }{\Omega }}\right)P\left(\mathbf {X} ,t\right)\right),\end{aligned}}}
where ${\displaystyle \mathbf {\Delta X} =\{1,-1\}}$ is the shift caused by the action of the product of step operators, required to change state ${\displaystyle \mathbf {X} }$ to a precursor state ${\displaystyle \mathbf {X} '}$.
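To make the example concrete, here is a minimal stochastic simulation of this single reaction using Gillespie's algorithm; the initial copy numbers and time horizon are arbitrary illustrative choices, not part of the formal development:

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_x1_to_x2(n1, n2, t_max):
    """One exact sample path of X1 -> X2; the propensity is Omega * f_1 = n1."""
    t = 0.0
    times, states = [0.0], [(n1, n2)]
    while n1 > 0:
        a_total = float(n1)                   # each X1 molecule converts at unit rate
        t += rng.exponential(1.0 / a_total)   # waiting time to the next reaction event
        if t > t_max:
            break
        n1, n2 = n1 - 1, n2 + 1               # apply the stoichiometry (-1, +1)
        times.append(t)
        states.append((n1, n2))
    return np.array(times), np.array(states)

times, states = gillespie_x1_to_x2(n1=100, n2=0, t_max=5.0)
print(states[-1])   # by t = 5 most of the initial X1 population has turned into X2
```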
## Linear noise approximation
If the master equation possesses nonlinear transition rates, it may be impossible to solve it analytically. The system size expansion utilises the ansatz that the variance of the steady-state probability distribution of constituent numbers in a population scales like the system size. This ansatz is used to expand the master equation in terms of a small parameter given by the inverse system size.
Specifically, let us write the ${\displaystyle X_{i}}$, the copy number of component ${\displaystyle i}$, as a sum of its "deterministic" value (a scaled-up concentration) and a random variable ${\displaystyle \xi }$, scaled by ${\displaystyle \Omega ^{1/2}}$:
${\displaystyle X_{i}=\Omega \phi _{i}+\Omega ^{1/2}\xi _{i}.}$
The probability distribution of ${\displaystyle \mathbf {X} }$ can then be rewritten in the vector of random variables ${\displaystyle \xi }$:
${\displaystyle P(\mathbf {X} ,t)=P(\Omega \mathbf {\phi } +\Omega ^{1/2}\mathbf {\xi } )=\Pi (\mathbf {\xi } ,t).}$
Let us consider how to write reaction rates ${\displaystyle f}$ and the step operator ${\displaystyle \mathbb {E} }$ in terms of this new random variable. Taylor expansion of the transition rates gives:
${\displaystyle f_{j}(\mathbf {x} )=f_{j}(\mathbf {\phi } +\Omega ^{-1/2}\mathbf {\xi } )=f_{j}(\mathbf {\phi } )+\Omega ^{-1/2}\sum _{i=1}^{N}{\frac {\partial f_{j}(\mathbf {\phi } )}{\partial \phi _{i}}}\xi _{i}+O(\Omega ^{-1}).}$
The step operator has the effect ${\displaystyle \mathbb {E} f(n)\rightarrow f(n+1)}$ and hence ${\displaystyle \mathbb {E} f(\xi )\rightarrow f(\xi +\Omega ^{-1/2})}$:
${\displaystyle \prod _{i=1}^{N}\mathbb {E} ^{-S_{ij}}\simeq 1-\Omega ^{-1/2}\sum _{i}S_{ij}{\frac {\partial }{\partial \xi _{i}}}+{\frac {\Omega ^{-1}}{2}}\sum _{i}\sum _{k}S_{ij}S_{kj}{\frac {\partial ^{2}}{\partial \xi _{i}\,\partial \xi _{k}}}+O(\Omega ^{-3/2}).}$
We are now in a position to recast the master equation.
{\displaystyle {\begin{aligned}&{}\quad {\frac {\partial \Pi (\mathbf {\xi } ,t)}{\partial t}}-\Omega ^{1/2}\sum _{i=1}^{N}{\frac {\partial \phi _{i}}{\partial t}}{\frac {\partial \Pi (\mathbf {\xi } ,t)}{\partial \xi _{i}}}\\&=\Omega \sum _{j=1}^{R}\left(-\Omega ^{-1/2}\sum _{i}S_{ij}{\frac {\partial }{\partial \xi _{i}}}+{\frac {\Omega ^{-1}}{2}}\sum _{i}\sum _{k}S_{ij}S_{kj}{\frac {\partial ^{2}}{\partial \xi _{i}\,\partial \xi _{k}}}+O(\Omega ^{-3/2})\right)\\&{}\qquad \times \left(f_{j}(\mathbf {\phi } )+\Omega ^{-1/2}\sum _{i}{\frac {\partial f_{j}(\mathbf {\phi } )}{\partial \phi _{i}}}\xi _{i}+O(\Omega ^{-1})\right)\Pi (\mathbf {\xi } ,t).\end{aligned}}}
This rather frightening expression makes a bit more sense when we gather terms in different powers of ${\displaystyle \Omega }$. First, terms of order ${\displaystyle \Omega ^{1/2}}$ give
${\displaystyle \sum _{i=1}^{N}{\frac {\partial \phi _{i}}{\partial t}}{\frac {\partial \Pi (\mathbf {\xi } ,t)}{\partial \xi _{i}}}=\sum _{i=1}^{N}\sum _{j=1}^{R}S_{ij}f_{j}(\mathbf {\phi } ){\frac {\partial \Pi (\mathbf {\xi } ,t)}{\partial \xi _{i}}}.}$
These terms cancel, due to the macroscopic reaction equation
${\displaystyle {\frac {\partial \phi _{i}}{\partial t}}=\sum _{j=1}^{R}S_{ij}f_{j}(\mathbf {\phi } ).}$
The terms of order ${\displaystyle \Omega ^{0}}$ are more interesting:
${\displaystyle {\frac {\partial \Pi (\mathbf {\xi } ,t)}{\partial t}}=\sum _{j}\left(\sum _{ik}-S_{ij}{\frac {\partial f_{j}}{\partial \phi _{k}}}{\frac {\partial (\xi _{k}\Pi (\mathbf {\xi } ,t))}{\partial \xi _{i}}}+{\frac {1}{2}}f_{j}\sum _{ik}S_{ij}S_{kj}{\frac {\partial ^{2}\Pi (\mathbf {\xi } ,t)}{\partial \xi _{i}\,\partial \xi _{k}}}\right),}$
which can be written as
${\displaystyle {\frac {\partial \Pi (\mathbf {\xi } ,t)}{\partial t}}=-\sum _{ik}A_{ik}{\frac {\partial (\xi _{k}\Pi )}{\partial \xi _{i}}}+{\frac {1}{2}}\sum _{ik}[\mathbf {BB} ^{T}]_{ik}{\frac {\partial ^{2}\Pi }{\partial \xi _{i}\,\partial \xi _{k}}},}$
where
${\displaystyle A_{ik}=\sum _{j=1}^{R}S_{ij}{\frac {\partial f_{j}}{\partial \phi _{k}}}={\frac {\partial (\mathbf {S} _{i}\cdot \mathbf {f} )}{\partial \phi _{k}}},}$
and
${\displaystyle [\mathbf {BB} ^{T}]_{ik}=\sum _{j=1}^{R}S_{ij}S_{kj}f_{j}(\mathbf {\phi } )=[\mathbf {S} \,{\mbox{diag}}(f(\mathbf {\phi } ))\,\mathbf {S} ^{T}]_{ik}.}$
The time evolution of ${\displaystyle \Pi }$ is then governed by the linear Fokker–Planck equation with coefficient matrices ${\displaystyle \mathbf {A} }$ and ${\displaystyle \mathbf {BB} ^{T}}$ (in the large-${\displaystyle \Omega }$ limit, terms of ${\displaystyle O(\Omega ^{-1/2})}$ may be neglected, termed the linear noise approximation). With knowledge of the reaction rates ${\displaystyle \mathbf {f} }$ and stoichiometry ${\displaystyle S}$, the moments of ${\displaystyle \Pi }$ can then be calculated.
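To illustrate how the last two expressions are used in practice, note that the stationary covariance Σ of ξ under the linear noise approximation solves the Lyapunov equation AΣ + ΣAᵀ + BBᵀ = 0. The sketch below applies this to a simple two-species birth-death scheme; the reactions and rate constants are illustrative assumptions, not taken from the text:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative two-species system (not from the text):
#   0 -> X1 (rate k0),  X1 -> 0 (rate d0*x1),
#   X1 -> X1 + X2 (rate k1*x1),  X2 -> 0 (rate d1*x2)
k0, d0, k1, d1 = 1.0, 0.1, 0.5, 0.05

S = np.array([[1, -1, 0,  0],
              [0,  0, 1, -1]])                    # stoichiometric matrix (N x R)

phi = np.array([k0 / d0, k1 * k0 / (d0 * d1)])    # deterministic fixed point

f  = np.array([k0, d0 * phi[0], k1 * phi[0], d1 * phi[1]])   # rates evaluated at phi
Jf = np.array([[0.0, 0.0],
               [d0,  0.0],
               [k1,  0.0],
               [0.0, d1 ]])                       # Jacobian df_j / dphi_k (R x N)

A   = S @ Jf                                      # drift matrix A_ik of the LNA
BBt = S @ np.diag(f) @ S.T                        # diffusion matrix [B B^T]_ik

# The stationary covariance of xi solves A Sigma + Sigma A^T + B B^T = 0
Sigma = solve_continuous_lyapunov(A, -BBt)
print(Sigma)
```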
## Software
The linear noise approximation has become a popular technique for estimating the size of intrinsic noise in terms of coefficients of variation and Fano factors for molecular species in intracellular pathways. The second moment obtained from the linear noise approximation (on which the noise measures are based) are exact only if the pathway is composed of first-order reactions. However bimolecular reactions such as enzyme-substrate, protein-protein and protein-DNA interactions are ubiquitous elements of all known pathways; for such cases, the linear noise approximation can give estimates which are accurate in the limit of large reaction volumes. Since this limit is taken at constant concentrations, it follows that the linear noise approximation gives accurate results in the limit of large molecule numbers and becomes less reliable for pathways characterized by many species with low copy numbers of molecules.
A number of studies have elucidated cases of the insufficiency of the linear noise approximation in biological contexts by comparison of its predictions with those of stochastic simulations.[3][4] This has led to the investigation of higher order terms of the system size expansion that go beyond the linear approximation. These terms have been used to obtain more accurate moment estimates for the mean concentrations and for the variances of the concentration fluctuations in intracellular pathways. In particular, the leading order corrections to the linear noise approximation yield corrections of the conventional rate equations.[5] Terms of higher order have also been used to obtain corrections to the variances and covariances estimates of the linear noise approximation.[6][7] The linear noise approximation and corrections to it can be computed using the open source software intrinsic Noise Analyzer. The corrections have been shown to be particularly considerable for allosteric and non-allosteric enzyme-mediated reactions in intracellular compartments.
## References
1. van Kampen, N. G. (2007). Stochastic Processes in Physics and Chemistry. North-Holland Personal Library.
2. Elf, J. and Ehrenberg, M. (2003). "Fast Evaluation of Fluctuations in Biochemical Networks With the Linear Noise Approximation". Genome Research, 13:2475–2484.
3. Hayot, F. and Jayaprakash, C. (2004). "The linear noise approximation for molecular fluctuations within cells". Physical Biology, 1:205.
4. Ferm, L., Lötstedt, P. and Hellander, A. (2008). "A Hierarchy of Approximations of the Master Equation Scaled by a Size Parameter". Journal of Scientific Computing, 34:127.
5. Grima, R. (2010). "An effective rate equation approach to reaction kinetics in small volumes: Theory and application to biochemical reactions in nonequilibrium steady-state conditions". The Journal of Chemical Physics, 132:035101.
6. Grima, R., Thomas, P. and Straube, A. V. (2011). "How accurate are the nonlinear chemical Fokker-Planck and chemical Langevin equations?". The Journal of Chemical Physics, 135:084103.
7. Grima, R. (2012). "A study of the accuracy of moment-closure approximations for stochastic chemical kinetics". The Journal of Chemical Physics, 136:154105.
# Score Following as a Multi-Modal Reinforcement Learning Problem
## Abstract
Score following is the process of tracking a musical performance (audio) in a corresponding symbolic representation (score). While methods using computer-readable score representations as input are able to achieve reliable tracking results, there is little research on score following based on raw score images. In this paper, we build on previous work that formulates the score following task as a multi-modal Markov Decision Process (MDP). Given this formal definition, one can address the problem of score following with state-of-the-art deep reinforcement learning (RL) algorithms. In particular, we design end-to-end multi-modal RL agents that simultaneously learn to listen to music recordings, read the scores from images of sheet music, and follow the music along in the sheet. Using algorithms such as synchronous Advantage Actor Critic (A2C) and Proximal Policy Optimization (PPO), we reproduce and further improve existing results. We also present first experiments indicating that this approach can be extended to track real piano recordings of human performances. These audio recordings are made openly available to the research community, along with precise note-level alignment ground truth.
How to Cite: Henkel, F., Balke, S., Dorfer, M. and Widmer, G., 2019. Score Following as a Multi-Modal Reinforcement Learning Problem. Transactions of the International Society for Music Information Retrieval, 2(1), pp.67–81. DOI: http://doi.org/10.5334/tismir.31
Published on 20 Nov 2019
Accepted on 12 Sep 2019. Submitted on 01 Feb 2019
## 1. Introduction
Score following is a long-standing research problem in Music Information Retrieval (MIR). It lies at the heart of applications such as automatic page turning (Arzt et al., 2008), automatic accompaniment (Cont, 2010; Raphael, 2010) or the synchronization of visualizations in live concerts (Arzt et al., 2015; Prockup et al., 2013). Score following can be seen as an online variant of music synchronization where the task is to align a given music recording to its corresponding musical score (see Müller, 2015; Thomas et al., 2012, for overviews). However, in score following scenarios, the music recording is not known a priori and the systems need to react to the ongoing performance. Many traditional systems use online variants of dynamic time warping (DTW) (Dixon and Widmer, 2005; Arzt, 2016) or hidden Markov models (Orio et al., 2003; Cont, 2006; Schwarz et al., 2004; Nakamura et al., 2015). However, these approaches usually rely on a symbolic, computer-readable representation of the score, such as MusicXML or MIDI. This symbolic representation is created either manually (e.g., through the time-consuming process of (re-)setting the score in a music notation program), or automatically, via optical music recognition (OMR) (Hajič jr and Pecina, 2017; Byrd and Simonsen, 2015; Balke et al., 2015), which—depending of the quality of the scanned sheet music—may require additional manual checking and corrections. To bypass these additional steps, Dorfer et al. (2016) propose a multi-modal deep neural network that directly learns to match sheet music and audio in an end-to-end fashion. Given short excerpts of audio and the corresponding sheet music, the network learns to predict which location in the given sheet image best matches the current audio excerpt. In this setup, score following can be formulated as a multi-modal localization task.
Recently, Dorfer et al. (2018b) formulated the score following task as a Markov Decision Process (MDP), which enabled them to use state-of-the-art deep reinforcement learning (RL) algorithms to teach an agent to follow along an audio recording in images of scanned sheet music, as depicted in Figure 1. The task of the agent is to navigate through the score by adapting its reading speed in reaction to the currently playing performance. As ground truth for this learning task, we assume that we have a collection of piano pieces represented as aligned pairs of audio recordings and sheet music images. The preparation of such a collection, including the entire alignment process, is described in detail by Dorfer et al. (2018a). In general, this scenario constitutes an interesting research framework that addresses aspects of both the application of multi-modal learning and following in music, and advanced reinforcement learning.
Figure 1
Sketch of score following in sheet music. Given the incoming audio, the score follower has to track the corresponding position in the score (image).
The specific contributions of the present work are as follows:
1. Based on the findings of Dorfer et al. (2018b), we extend the experiments with an additional policy gradient method, namely, Proximal Policy Optimization (PPO) (Schulman et al., 2017). Using PPO for our score-following scenario further improves the system’s performance. This confirms one of the concluding hypotheses of Dorfer et al. (2018b), that improved learning algorithms directly translate into an improvement in our application scenario.
2. We provide extensive baseline experiments using optical music recognition and an online variant of DTW. The results indicate that our RL approach is a viable alternative to the OMR-DTW strategy, yielding competitive performance on the used datasets without additional preprocessing steps such as OMR.
3. All experiments so far were based on synthetic data, with audio synthesized directly from the score. We report on first experiments with recordings of 16 real piano performances. The recorded pieces belong to the test split of the Multi-modal Sheet Music Dataset (MSMD) (Dorfer et al., 2018a), which is also used in our other experiments. The results on this new dataset suggest that our agents are starting to generalize to the real-world scenario even though they were solely trained on synthetic data.
4. We make this set of piano recordings openly available to the research community, along with the ground-truth annotations (precise alignments of played notes to corresponding note heads in the sheet music).
In addition to quantitative experiments, we also take a look at the feature space learned by our agents, and what aspects of the state representation they tend to base their decisions on. To this end, we briefly present a t-SNE projection of a specific hidden network layer, and use a gradient-based attribution method to pinpoint what the agent “looks at” when making a decision. This allows for a sanity check of both model design and model behavior.
The remainder of the article is structured as follows. In Section 2, we start by defining the task of score following as a Markov Decision Process (MDP) and explaining its basic building blocks. Section 3 introduces the concept of Policy Gradient Methods and provides details on three learning algorithms we will use. Section 4 proceeds with a description of our experiments and presents results for the case of synthesized piano data. Section 5 then briefly looks at model interpretability, providing some glimpses into the learned representations and policies. In Section 6, we report on first experiments using real piano recordings instead of synthetic data. Finally, Section 7 summarizes our work and provides an outlook on future research directions.
## 2. Score Following as a Markov Decision Process
In this section, we formulate the task of score following as a Markov Decision Process (MDP), the mathematical foundation for reinforcement learning or, more generally, for the problem of sequential decision making. The notation in this paper closely follows the descriptions given in the book by Sutton and Barto (2018).1
Reinforcement learning can be seen as a computational approach to learning from interactions to achieve a certain predefined goal. Figure 2 provides an overview of the components involved in the score following MDP. The score following agent (or learner) is the active component that interacts with its environment, which in our case is the score following task. The interaction takes place in a closed loop where the environment confronts the agent with a new situation (a state St) and the agent has to respond by making a decision, selecting one out of a predefined set of possible actions At. After each action taken the agent receives the next state St+1 and a numerical reward signal Rt+1 indicating how well it is doing in achieving the overall goal. Informally, the agent’s goal in our case is to track a performance in the score as accurately and robustly as possible; this criterion will be formalized in terms of an appropriate reward signal in Section 2.3. By running the MDP interaction loop we end up with a sequence of states, actions, and rewards S0, A0, R1, S1, A1, R2, S2, A2, R3, …, which is the kind of experience an RL agent is learning its behavior from. We will elaborate on different variants of the learning process in Section 3. The remainder of this section specifies all components of the score following MDP in detail. In practice, our MDP is implemented as an environment in OpenAI-Gym, an open source toolkit for developing and comparing reinforcement learning algorithms (Brockman et al., 2016).
Figure 2
Sketch of the score following MDP. The agent receives the current state of the environment St and a scalar reward signal Rt for the action taken in the previous time step. Based on the current state it has to choose an action (e.g., decide whether to increase, keep or decrease its speed in the score) in order to maximize future reward by correctly following the performance in the score.
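In code, this interaction loop takes the familiar agent-environment form. The sketch below uses the classic OpenAI-Gym API; the environment id and the random action choice are placeholders rather than the actual implementation accompanying the paper:

```python
import gym

# "ScoreFollowing-v0" is a hypothetical id; the real score-following MDP is a
# custom OpenAI-Gym environment, as described in the text.
env = gym.make("ScoreFollowing-v0")

state = env.reset()                       # S_0
done, episode_return = False, 0.0
while not done:
    action = env.action_space.sample()    # stand-in for the learned policy pi(a|s)
    state, reward, done, info = env.step(action)   # S_{t+1}, R_{t+1}
    episode_return += reward
print(episode_return)
```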
### 2.1. Score Following Markov States
Our agents need to operate on two different inputs at the same time, which together form the state St of the MDP: input modality one is a sliding window of the sheet image of the current piece, and modality two is an audio spectrogram excerpt of the most recently played music (~ 2 seconds). Figure 3 shows an example of this input data for a piece by J. S. Bach. Given the audio excerpt as an input the agent’s task is to navigate through the global score to constantly receive sheet windows from the environment that match the currently playing music. How this interaction with the score takes place is explained in the next subsection. The important part for now is to note that score following embodies dynamics which have to be captured by our state encoding, in order for the process to satisfy the Markov property. The Markov property means that a future state only depends on the current state, not on the past, i.e., p(St+1|St, St–1, St–2, …, S0) = p(St+1|St). While there are ways to tackle problems where the Markov property is violated, it is desirable to formalize environments in such a way that the state transition process is Markovian (Sutton and Barto, 2018). Therefore, we extend the state representation by adding the one step differences (Δ) of both the score and the spectrogram. With the Δ-image of the score and the Δ-spectrogram, a state contains all the information needed by the agent to determine where and how fast it is moving along in the sheet image.
Figure 3
Markov state of the score following MDP: the current sheet sliding window and spectrogram excerpt. To capture the dynamics of the environment we also add the one step differences (Δ) w.r.t. the previous time step (state).
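A minimal sketch of how such a state could be assembled from the two modalities is given below; the array shapes are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def make_state(sheet_window, spec_excerpt, prev_sheet_window, prev_spec_excerpt):
    """Combine the current sheet/spectrogram views with their one-step differences."""
    return {
        "sheet": sheet_window,                            # sliding window over the unrolled score
        "sheet_delta": sheet_window - prev_sheet_window,  # change since the previous time step
        "spec": spec_excerpt,                             # ~2 s spectrogram excerpt ending "now"
        "spec_delta": spec_excerpt - prev_spec_excerpt,   # change since the previous time step
    }

# Illustrative shapes only: (height x width) pixels and (frequency bins x frames).
sheet_t, sheet_tm1 = np.zeros((160, 512)), np.zeros((160, 512))
spec_t, spec_tm1 = np.zeros((92, 40)), np.zeros((92, 40))
state = make_state(sheet_t, spec_t, sheet_tm1, spec_tm1)
```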
### 2.2. Agents, Actions, and Policies
The next item in the MDP (Figure 2) is the agent, which is the component interacting with the environment by taking actions as a response to states received. As already mentioned, we interpret score following as a multi-modal control problem where the agent decides how fast it needs to progress in the score. In more precise terms, the agent controls its score progression speed νpxl in pixels per time step by selecting action $A_t \in \mathcal{A} := \{-\Delta\nu_{pxl},\, 0,\, +\Delta\nu_{pxl}\}$ after receiving state St in each time step $t \in \mathbb{N}_0$. Actions ±Δνpxl increase or decrease the speed by the constant Δνpxl pixels per time step. Action At = 0 keeps it unchanged. To give an example: a pixel speed of νpxl = 14 would shift the sliding sheet window 14 pixels forward (to the right) in the global unrolled score. Restricting the action space to three possible actions is a design choice and in theory one could use arbitrarily many actions or even a continuous action space. However, initial experiments showed that this is harder to learn for the agent and leads to an overall worse performance. Theoretically, our restricted set of actions might cause problems if a piece starts very fast, as the agent would not be able to speed up quickly enough; however, we did not find this to be a problem in our data.
Finally, we introduce the concept of a policy π (a|s) to define an agent’s behavior. π is a conditional probability distribution over actions conditioned on the current state. Given a state s, it computes an action selection probability π (a|s) for each of the candidate actions $a\in \mathcal{A}$. The probabilities are then used for sampling one of the possible actions. In Section 3 we will parameterize policy π (a|s;θ) and explain how it can be learned by using deep neural networks as function approximators.
### 2.3. Goal Definition: Reward Signal and State Values
In order to learn a useful action selection policy, the agent needs feedback. This means that we need to define how to report back to the agent how well it does in accomplishing the task and, more importantly, what the task actually is.
The one component in an MDP that defines the overall goal is the reward signal $R_t \in \mathbb{R}$. It is provided by the environment in the form of a scalar, each time the agent performs an action. The sole objective of an RL agent is to maximize the cumulative reward over time. Note that achieving this objective requires foresight and planning, as actions leading to high instantaneous reward might lead to unfavorable situations in the future, and vice versa. To quantify this long-term success, RL introduces the return G, defined as the discounted cumulative future reward: $G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \ldots$. The discount rate γ ∈ (0, 1) is a hyper-parameter assigning less weight to future rewards.
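As a small illustration, the discounted returns for a recorded reward sequence can be computed as follows; this is a generic sketch (γ = 0.9 is the value used in our experiments, see Table 5), not code from the actual system.

```python
def discounted_returns(rewards, gamma=0.9):
    """Compute G_t for every step of a finished episode; rewards[t] holds
    R_{t+1}, i.e. the reward received after the action taken at step t."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))
```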
Figure 4 summarizes the reward computation in our score following MDP. Given annotated training data as mentioned in the introduction, the environment knows, for each onset time in the audio, the true target position x in the score. From this, and the current position $\hat{x}$ of the agent, we compute the current tracking error as $d_x = \hat{x} - x$, and define the reward signal r(dx) within a predefined tracking window [x – b, x + b] around target position x as:
(1)
$r(d_x) = 1.0 - \frac{|d_x|}{b}.$
Figure 4
Reward definition in the score following MDP. The reward Rt (range [0, 1]) decays linearly with the agent’s distance dx from the current true score position x.
Thus, the reward per time step reaches its maximum of 1.0 when the agent’s position is identical to the target position, and decays linearly towards 0.0 as the tracking error reaches the maximum permitted value b given by the window size. Whenever the absolute tracking error exceeds b (the agent drops out of the window), we reset the score following game (back to start of score, first audio frame). As an RL agent’s sole objective is to maximize cumulative future reward Gt, it will learn to match the correct position in the score and not to lose its target by dropping out of the window. We define the target onset, corresponding to the target position in the score, as the rightmost frame in the spectrogram excerpt. This allows us to run the agents on-line, introducing only the delay required to compute the most recent spectrogram frame. In practice, we linearly interpolate the score positions for spectrogram frames between two subsequent onsets in order to produce a continuous and stronger learning signal for training.
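The reward and drop-out logic of Equation (1) can be summarized in a few lines; the following is an illustrative sketch with hypothetical names, where returning None stands for the episode reset described above.

```python
def reward(agent_pos, target_pos, b):
    """Reward of Equation (1) inside the tracking window [x - b, x + b];
    returns None when the agent drops out of the window, which triggers
    a reset of the score following game."""
    d_x = agent_pos - target_pos
    if abs(d_x) > b:
        return None
    return 1.0 - abs(d_x) / b
```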
We will further define the State-Value Function as $v_\pi(s) := \mathbb{E}[G_t \mid S_t = s]$ which is the expected return given that we are in a certain state s and follow our policy π. Intuitively, this measures how good a certain state actually is: it is beneficial for the agent to be in a state with a high value as it will yield a high return in the long run. As with policy π, we use function approximation to predict the state value, denoted by $\hat{v}(s; \mathbf{w})$ with parameters w. We will see in the next section how state-of-the-art RL algorithms use these value estimates to stabilize the variance-prone process of policy learning.
## 3. Learning To Follow
Given the formal definition of score following as an MDP we now describe how to address it with reinforcement learning. While there is a large variety of RL algorithms, we focus on policy gradient methods, in particular the class of actor-critic methods, due to their reported success in solving control problems (Duan et al., 2016). The learners utilized are REINFORCE with Baseline (Williams, 1992), Synchronous Advantage Actor Critic (A2C) (Mnih et al., 2016; Wu et al., 2017), and Proximal Policy Optimization (PPO) (Schulman et al., 2017), where the latter two are considered state-of-the-art approaches.
### 3.1. Policy and State-Value Approximation via DNNs
In Section 2, we introduced the policy π, determining the behavior of an agent, and value function $\hat{v}$, predicting how good a certain state s is with respect to cumulative future reward. Actor-critic methods make use of both concepts. The actor is represented by the policy π and is responsible for selecting the appropriate action in each state. The critic is represented by the value function $\hat{v}$ and helps the agent to judge how good the selected actions actually are. In the context of deep RL, both functions are approximated via Deep Neural Networks (DNNs), termed policy and value networks. In the following we denote the parameters of the policy and value network as θ and w, respectively.
Figure 5 shows a sketch of our architecture. Like Dorfer et al. (2016), we use a multi-modal convolutional neural network operating on both sheet music and audio at the same time. The input to the network is exactly the Markov state of the MDP introduced in Section 2.1. The left part of the network processes sheet images, the right part spectrogram excerpts (including Δ images). After low-level representation learning, the two modalities are merged by concatenation and further processed using dense layers. This architecture implies that policy and value networks share the parameters of the lower layers, which is a common choice in RL (Mnih et al., 2016). Finally, there are two output layers: the first represents our policy and predicts the action selection probability π(a|s;θ). It contains three output neurons (one for each possible action) converted into a valid probability distribution via soft-max activation. The second output layer consists of a single linear output neuron predicting the value $\hat{v}(s; \mathbf{w})$ of the current state. In Section 4 we list the exact architectures used for our experiments.
Figure 5
Multi-modal network architecture used for our score following agents. Given state s, the policy network predicts the action selection probability π (a|s;θ) for the allowed actions a ∈ {–Δνpxl, 0, +Δνpxl}. The value network, sharing parameters with the policy network, provides a state-value estimate $\hat{v}(s; \mathbf{w})$ for the current state.
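To make the two-pathway design more tangible, the following sketch shows a strongly simplified version of such a policy/value network in PyTorch (the framework choice and the reduced layer sizes are ours and do not reproduce the exact architectures of Tables 1 and 2).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScoreFollowingNet(nn.Module):
    """Strongly simplified two-pathway policy/value network (cf. Figure 5);
    layer sizes are illustrative and do not match Tables 1 and 2 exactly."""

    def __init__(self, n_actions=3):
        super().__init__()
        # audio pathway: spectrogram excerpt and its delta as 2 input channels
        self.audio = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2), nn.ELU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ELU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 128), nn.ELU(),
        )
        # sheet pathway: score window and its delta as 2 input channels
        self.sheet = nn.Sequential(
            nn.Conv2d(2, 16, 5, stride=2), nn.ELU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ELU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 128), nn.ELU(),
        )
        # shared layers after concatenation, then the two output heads
        self.merge = nn.Sequential(nn.Linear(256, 256), nn.ELU())
        self.policy_head = nn.Linear(256, n_actions)   # soft-max over actions
        self.value_head = nn.Linear(256, 1)            # state-value estimate

    def forward(self, spec, sheet):
        h = self.merge(torch.cat([self.audio(spec), self.sheet(sheet)], dim=1))
        return F.softmax(self.policy_head(h), dim=-1), self.value_head(h)
```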
### 3.2. Learning a Policy via Policy Gradient
One of the first algorithms proposed for optimizing a policy was REINFORCE (Williams, 1992), a Monte Carlo algorithm that learns by generating entire episodes S0, A0, R1, S1, A1, R2, S2, A2, … of states, actions and rewards by following its current policy π while interacting with the environment. Given this sequence it then updates the parameters θ of the policy network according to the following update rule, replaying the episode time step by time step:
(2)
$\theta \leftarrow \theta + \alpha\, G_t\, \nabla_\theta \ln \pi(A_t \mid S_t; \theta),$
where α is the step size or learning rate and Gt is the true discounted cumulative future reward (the return) received from time step t onwards. Gradient ∇θ is the direction in parameter space in which to go if we want to increase the selection probability of the respective action. This means whenever the agent did well (achieved a high return Gt), we take larger steps in parameter space towards selecting the responsible actions. By changing the parameters of the policy network, we of course also change our policy (behavior) and we will select beneficial actions more frequently in the future when confronted with similar states.
REINFORCE and policy optimization are known to have high variance in the gradient estimate (Greensmith et al., 2004). This results in slow learning and poor convergence properties. To address this problem, REINFORCE with Baseline (REINFORCEbl) adapts the update rule of Equation (2) by subtracting the estimated state value $\hat{v}(s; \mathbf{w})$ (see Section 2.3) from the actual return Gt received:
(3)
$\theta \leftarrow \theta + \alpha\, \left(G_t - \hat{v}(S_t; \mathbf{w})\right)\, \nabla_\theta \ln \pi(A_t \mid S_t; \theta).$
This simple adaptation helps to reduce variance and improves convergence. The intuition behind subtracting $\hat{v}$ (the baseline) is that, as this term represents the expected or average return, we evaluate the actions we took in a certain state with respect to the average performance we expect in this exact state. This means that if we chose actions that were better than our average performance, we will increase the probability of taking them in the future, as the expression inside the brackets will be positive. If they were worse, the expression will be negative and we thus reduce their probabilities. The value network itself is learned by minimizing the squared difference between the actually received return Gt and the value estimate $\hat{v}(s; \mathbf{w})$ predicted by the network:
(4)
$w \leftarrow w + \alpha_w\, \nabla_w \left(G_t - \hat{v}(S_t; \mathbf{w})\right)^2,$
where αw is a separate learning rate for the value network. Note that as the policy and value network share some parameters, we can also jointly optimize them with a single learning rate. REINFORCEbl will be the first learning algorithm considered in our experiments.
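A compact sketch of one REINFORCEbl update for a network with shared parameters might look as follows; the tensor shapes and the joint optimization of both losses are assumptions for illustration, not the exact training code.

```python
import torch

def reinforce_bl_update(probs, values, actions, returns, optimizer):
    """One REINFORCE-with-baseline update (Equations 3 and 4) for a network
    with shared parameters. probs: (T, 3) action probabilities of one episode,
    values: (T, 1) state-value estimates, actions: (T,) LongTensor of taken
    actions, returns: discounted returns G_t computed after the episode."""
    returns = torch.as_tensor(returns, dtype=torch.float32)
    log_probs = torch.log(probs.gather(1, actions.view(-1, 1)).squeeze(1))
    advantage = returns - values.squeeze(1)
    policy_loss = -(advantage.detach() * log_probs).mean()   # Equation (3)
    value_loss = advantage.pow(2).mean()                      # Equation (4)
    optimizer.zero_grad()
    (policy_loss + value_loss).backward()
    optimizer.step()
```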
### 3.3. Advantage Actor Critic (A2C)
Actor-critic methods are an extension of the baseline concept, allowing agents to learn in an online fashion while interacting with the environment. This avoids the need for creating entire episodes prior to learning and allows the agent to incrementally improve its policy step by step. In particular, our actor-critic agent will only look into the future for a fixed number of tmax time steps (in our case 15). This implies that we do not have the actual return Gt available for updating the value function. The solution is to bootstrap the value function (i.e., update the value estimate with estimated values), which is the core characteristic of actor-critic methods.
In this work, we use the Advantage Actor Critic (A2C) algorithm, a synchronous version of the Asynchronous Advantage Actor Critic (A3C) (Mnih et al., 2016; Wang et al., 2016; Wu et al., 2017). One of the core aspects of this algorithm is to run multiple actors (in our case 8) in parallel on different instances of the same kind of environment, which should further help to stabilize training by decorrelating the samples used for updating. We will see in our experiments that this also holds for the score following task and the concept of parallelism further allows us to train the agents faster compared to algorithms like REINFORCE.
The algorithm itself can be seen as a multi-step actor-critic method, i.e., we take a certain number tmax of steps before we apply an update. Furthermore, the algorithm applies entropy regularization as introduced by Williams and Peng (1991): the entropy of the policy is added to the update rule, which should avoid early convergence to non-optimal policies as well as encourage exploration. The idea is to keep the entropy high and thus have more evenly distributed action selection probabilities. As a consequence, different actions will be chosen more frequently which in turn leads to more exploration. For readers unfamiliar with RL, this refers to the general trade-off between exploration and exploitation. Usually we try to learn a policy that is optimal in the sense of yielding the overall highest return. However, an agent that only executes those actions it currently thinks are the best (exploitation), might not discover more rewarding ones. In order to do so, the agent needs to try (explore) all actions to determine which are the best. This, however, contradicts the notion of optimality, as the agent will inevitably have to perform actions that are non-optimal in the long run. We found entropy regularization as a means of exploration to be crucial in our experiments, and the hyper-parameter controlling the influence of this regularization term requires careful tuning.
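The two ingredients that distinguish A2C from plain REINFORCEbl in our setting, bootstrapped multi-step returns and the entropy bonus, can be sketched as follows; the function names are illustrative and the default hyperparameter values correspond to Table 5.

```python
import torch

def n_step_returns(rewards, bootstrap_value, gamma=0.9):
    """Multi-step returns used by A2C: after t_max steps the critic's value
    estimate of the last state stands in for the unknown remaining return."""
    returns, g = [], bootstrap_value
    for r in reversed(rewards):            # rewards of the last t_max steps
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

def entropy_bonus(probs, beta=0.05):
    """Entropy of the policy, added to the objective (i.e. subtracted from
    the loss) to discourage premature convergence and encourage exploration."""
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()
    return beta * entropy
```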
### 3.4. Proximal Policy Optimization (PPO)
The final algorithm we consider is Proximal Policy Optimization (Schulman et al., 2017). Similar to A2C, PPO is an actor-critic method; it employs the same approach of multiple parallel actors and multi-step updates. However, the objective and strategy for optimizing the policy are conceptually different.
One problem with regular policy gradient methods is sample efficiency, i.e., the number of interactions the agent has to perform within the environment until it is able to solve a given task. High variance in the gradient estimate during the learning process leads to a low sample efficiency and thus the goal is to reduce variance through various methods. For example, PPO uses generalized advantage estimation (GAE) (Schulman et al., 2015). GAE allows us to reduce variance by introducing bias (the well-known bias–variance trade-off (Bishop, 2006)). How much bias we introduce is controlled by an additional hyperparameter λ ∈ [0, 1], where a value closer to zero will result in more bias. Besides GAE, PPO tries to improve sample efficiency further by performing multiple update steps, reusing samples created during interaction. However, in order to do so the objective used for optimizing the policy must be changed, as it would otherwise have a damaging effect on the policy itself (Schulman et al., 2017). Thus, PPO optimizes a clipped surrogate objective function, which allows running multiple epochs of mini-batch updates on the policy. The idea behind this clipping objective is that an update will not drive the new policy too far away from the old one. The clipping itself is controlled by another hyperparameter $\epsilon \in \mathbb{R}_{>0}$ that can significantly influence the results and therefore again requires proper tuning.
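The core of the clipped surrogate objective can be written in a few lines; the sketch below is generic (with ε = 0.2 as in Table 5) and omits the surrounding PPO training loop.

```python
import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, eps=0.2):
    """Clipped surrogate objective of PPO, returned as a loss to minimize;
    eps is the clipping parameter ε (0.2 in our experiments, see Table 5)."""
    ratio = torch.exp(new_log_probs - old_log_probs)    # π_new(a|s) / π_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```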
### 3.5. Distinction to Supervised Approaches
While RL offers appealing methods for solving the score following MDP, the question remains what advantages there are compared to a fully supervised approach. Considering the fact that algorithms like A2C and PPO have additional hyperparameters requiring careful tuning, and considering the long training times (for our MSMD data, this can be up to five days), it might seem reasonable to opt for supervised learning. This is theoretically possible, as we have an exact ground truth: a matching between time positions in the audio and the corresponding pixel positions in the score image. Thus we can derive an optimal tempo curve in terms of pixel speed as depicted in Figure 6 and, consequently, an optimal sequence of tempo change actions for each of the training pieces. However, this optimal tempo sequence only offers a continuous solution (e.g., At = 2.5) and is not directly compatible with the discrete action space we defined in our MDP. As several RL algorithms such as PPO and Deep Deterministic Policy Gradient (Lillicrap et al., 2015) are applicable to continuous control problems, we could formulate the MDP in such a way. However, this not only makes the problem harder to learn for the algorithms, but also reintroduces the issue with ambiguous repetitive musical structures as described by Dorfer et al. (2016): there are situations where the current audio excerpts can match to multiple positions within the score excerpt, e.g., a repeatedly played note sequence. If the agent is allowed to freely adapt its position within the score, it is possible that it will jump between such ambiguous structures. The discrete action definition of our MDP alleviates this problem, as such jumps are not possible. In other words, the agent is constrained to read the score in small incremental pixel steps.
Figure 6
Optimal tempo curve and corresponding optimal actions At for a continuous agent (piece: J. S. Bach, BWV994). The At would be the target values for training an agent with supervised, feed-forward regression.
Taking a closer look at an example of such an optimal tempo curve (Figure 6), we observe another problem. The corresponding optimal actions we need as targets for training a supervised agent are zero most of the time. For the remaining steps we observe sparse spikes of varying amplitude. These sparse spikes are hard for a neural network to learn. In fact, we tried training a neural network with a similar structure as given in Figure 5, but ended up with a model that predicts values close to zero for all its inputs. Another drawback of the supervised approach is that during training, the agent would only see situations that correspond to the optimal trajectory. Thus, it would never encounter states where it has to recover itself in order to reach the desired target position. While this could be alleviated through sophisticated data augmentation, the RL framework offers a much more natural solution, as agents are inevitably confronted with imperfect situations, especially at the beginning of the training process.
## 4. Experiments
In the following section, we reproduce the experiments reported by Dorfer et al. (2018b), and extend them with new results obtained with Proximal Policy Optimization (PPO), demonstrating additional improvement brought about by the new deep RL algorithm. Furthermore, we compare our RL approach to two baseline methods, one of which is based on the more “traditional” OMR+DTW pipeline already mentioned in the introduction. In Section 6, we complement this with first results on real human performances.
### 4.1. Experimental Setup
We use two different datasets in our experiments. The first one is a subset of the Nottingham Dataset, comprising 296 monophonic melodies of folk music, partitioned into 187 training, 63 validation and 46 test pieces (Boulanger-Lewandowski et al., 2012). The second one is the Multi-modal Sheet Music Dataset (MSMD) (Dorfer et al., 2018a). The original dataset consists of 479 classical pieces by various composers such as Bach, Mozart, and Beethoven. When visually exploring what our agents learn, we discovered alignment errors in 12 pieces (6 in the training and test split, respectively).2 The cleaned version now consists of 354 training, 19 validation and 94 test pieces. In both cases the sheet music is typeset with Lilypond3 and the audio is synthesized from MIDI using a piano sound font with a sample rate of 22.05 kHz. This automatic rendering process provides the precise audio-sheet music alignments required for training and evaluation. For audio processing we set the computation rate to 20 frames per second and compute log-frequency spectrograms. The FFT is computed with a window size of 2048 samples and post-processed with a logarithmic filterbank allowing only frequencies from 60 Hz to 6 kHz (78 frequency bins).
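The audio front end can be approximated as follows; since the text does not prescribe a particular library, this sketch uses librosa for the STFT and a simple hand-built log-spaced filterbank, roughly matching the parameters above (22.05 kHz, 20 fps, 2048-sample window, 60 Hz to 6 kHz, 78 bins).

```python
import numpy as np
import librosa

SR, FPS, N_FFT, N_BINS = 22050, 20, 2048, 78

def log_frequency_spectrogram(audio_path):
    """Magnitude STFT at ~20 frames per second, pooled into 78 log-spaced
    bands between 60 Hz and 6 kHz, roughly matching the setup in the text."""
    y, _ = librosa.load(audio_path, sr=SR, mono=True)
    spec = np.abs(librosa.stft(y, n_fft=N_FFT, hop_length=SR // FPS))
    freqs = np.linspace(0, SR / 2, spec.shape[0])
    edges = np.geomspace(60, 6000, N_BINS + 1)        # logarithmic band edges
    bands = [spec[(freqs >= lo) & (freqs < hi)].sum(axis=0)
             for lo, hi in zip(edges[:-1], edges[1:])]
    return np.log1p(np.stack(bands))                  # shape: (78, n_frames)
```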
The spectrogram context visible to the agents is set to 40 frames (2 seconds of audio) and the sliding window sheet images of the unrolled score cover 40 × 150 and 160 × 512 pixels for Nottingham and MSMD, respectively. For MSMD we further downscale the score by a factor of two before presenting it to the network. The network architectures for Nottingham and MSMD are listed in Table 1 and Table 2. We use exponential linear units (Clevert et al., 2016) for all but the two output layers. As optimizer we use the Adam update rule (Kingma and Ba, 2015) with an initial learning rate of $10^{-4}$ and default parameters for the running average coefficients (0.9 and 0.999). We then train the models until there is no improvement in the number of tracked onsets on the validation set for 50 epochs; at that point we reduce the learning rate by a factor of 10 and continue, applying this refinement step twice. The tempo change action Δνpxl is 1.0 pixel per time step for both datasets. Table 5 in the Appendix summarizes the hyperparameters used for training the agents.
Table 1
Network architecture used for the Nottingham dataset. Conv (3, stride-1)-16: 3 × 3 convolution, 16 feature maps and stride 1. No zero-padding is applied. We use ELU activation on all layers if not stated otherwise.
| Audio (Spectrogram) 78 × 40 | Sheet-Image 40 × 150 |
| --- | --- |
| Conv (3, stride-2)-16 | Conv (5, stride-(1, 2))-16 |
| Conv (3, stride-2)-32 | Conv (3, stride-2)-32 |
| Conv (3, stride-2)-32 | Conv (3, stride-2)-32 |
| Conv (3, stride-1)-64 | Conv (3, stride-(1,2))-64 |
| Concatenation + Dense (256) | |
| Dense (256) | Dense (256) |
| Dense (3) – Softmax | Dense (1) – Linear |
Table 2
Network architecture used for MSMD. DO: Dropout; Conv (3, stride-1)-16: 3 × 3 convolution, 16 feature maps and stride 1. No zero-padding is applied. We use ELU activation on all layers if not stated otherwise.
| Audio (Spectrogram) 78 × 40 | Sheet-Image 80 × 256 |
| --- | --- |
| Conv (3, stride-1)-16 | Conv (5, stride-(1, 2))-16 |
| Conv (3, stride-1)-16 | Conv (3, stride-1)-16 |
| Conv (3, stride-2)-32 | Conv (3, stride-2)-32 |
| Conv (3, stride-1)-32 + DO (0.2) | Conv (3, stride-1)-32 + DO (0.2) |
| Conv (3, stride-2)-64 | Conv (3, stride-2)-32 |
| Conv (3, stride-2)-96 | Conv (3, stride-2)-64 + DO (0.2) |
| Conv (1, stride-1)-96 + DO (0.2) | Conv (3, stride-2)-96 |
| Dense (512) | Conv (1, stride-1)-96 + DO (0.2) |
| | Dense (512) |
| Concatenation + Dense (512) | |
| Dense (256) + DO (0.2) | Dense (256) + DO (0.2) |
| Dense (3) – Softmax | Dense (1) – Linear |
Recall from Section 2.3 and Figure 4 that from the agent’s position $\hat{x}$ and the ground truth position x, we compute the tracking error dx. We fix the size of the tracking window by setting b to be a third of the width of the score excerpt (b = 50 for the Nottingham dataset and b = 170 for MSMD). This error is the basis for our evaluation measures. However, in contrast to training, in the evaluation we only consider time steps where there is actually an onset present in the audio. While interpolating intermediate time steps is helpful for creating a stronger learning signal (Section 2.3), it is not musically meaningful. Specifically, we will report the evaluation statistics mean absolute tracking error $\overline{|d_x|}$ as well as its standard deviation std(|dx|) over all test pieces. These two measures quantify the accuracy of the score followers. To also measure their robustness we calculate the ratio Ron ∈ [0, 1] of overall tracked onsets as well as the ratio Rtue ∈ [0, 1] of pieces tracked entirely from beginning to end:
(5)
$R_{on} = \frac{\#\text{ tracked onsets}}{\#\text{ onsets}}$
(6)
$R_{tue} = \frac{\#\text{ pieces tracked to the end}}{\#\text{ pieces}}$
An onset counts as tracked if the agent reached it without dropping out of the tracking window. If all onsets of a piece are tracked, i.e., the agent did not drop out of the tracking window, we say the piece is tracked until the end.
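Computing the two robustness measures from per-piece tracking results is then straightforward; the following helper is an illustrative sketch that assumes the results are collected as (tracked onsets, total onsets) pairs.

```python
def evaluation_metrics(piece_results):
    """R_on and R_tue from per-piece results, where `piece_results` is a list
    of (tracked_onsets, total_onsets) tuples, one entry per test piece."""
    r_on = sum(t for t, _ in piece_results) / sum(n for _, n in piece_results)
    r_tue = sum(1 for t, n in piece_results if t == n) / len(piece_results)
    return r_on, r_tue
```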
### 4.2. Baseline Approaches
In the following, we compare our RL-based agents to two different baselines. The first is the approach described by Dorfer et al. (2016), which models score following as a multi-modal localization task (denoted by MM-Loc in the following): the sheet snippets are split into discrete buckets, and a neural network is trained in a supervised fashion to predict the most probable bucket given the current audio excerpt. Note that this approach works on the raw image data, but it does not take into account any additional temporal context (e.g., contiguity of frames and buckets).
For the second baseline, we apply a variant of Online Dynamic Time Warping (ODTW) (Dixon, 2005). This approach does not work on the raw sheet image data and thus requires further preprocessing, namely, Optical Music Recognition (OMR), which converts the scanned sheet music into a computer-readable format such as MIDI or MusicXML. We consider two different settings for this baseline approach.
In the first setting (which we will call MIDI-ODTW), we assume that we have an ideal OMR system that can perfectly extract a score, in the form of a MIDI file, from the sheet music; to simulate this, we directly use the MIDI data contained in the MSMD dataset. Matching this perfect score to the synthetic performances is an easy task, since both—score and performance—are the same audio recordings at the sample level. We thus consider this method as our theoretical upper bound.
In the second setting (OMR-ODTW), we use a standard OMR system to extract a (possibly flawed) MIDI file from the rendered score image. For our experiments, we chose the open source tool Audiveris.4 Audiveris’s output is in the form of a MusicXML file, which we convert to MIDI (OMR-MIDI) using Musescore.5 To align the extracted score with the performance, we synthesize the MIDI data using FluidSynth,6 extract chroma features (feature rate = 20 Hz) from the synthesized score and the performance audio, and align them using ODTW (Müller, 2015).
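To illustrate the feature extraction and alignment idea behind this baseline, the sketch below aligns two recordings with chroma features and offline DTW using librosa; the actual baseline uses the on-line DTW variant of Dixon (2005), so this only approximates the procedure.

```python
import librosa

def align_with_chroma_dtw(score_audio_path, perf_audio_path, sr=22050, fps=20):
    """Chroma features for the synthesized score and the performance,
    aligned with (offline) DTW; the warping path maps score frames to
    performance frames."""
    hop = sr // fps
    y_score, _ = librosa.load(score_audio_path, sr=sr)
    y_perf, _ = librosa.load(perf_audio_path, sr=sr)
    chroma_score = librosa.feature.chroma_stft(y=y_score, sr=sr, hop_length=hop)
    chroma_perf = librosa.feature.chroma_stft(y=y_perf, sr=sr, hop_length=hop)
    _, wp = librosa.sequence.dtw(X=chroma_score, Y=chroma_perf, metric='cosine')
    return wp[::-1]   # (score_frame, performance_frame) pairs, start to end
```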
Evaluating MIDI-ODTW is trivial as we have a given ground truth matching (the warping path aligning audio and score is the main diagonal of the DTW’s cost matrix). However, for OMR-ODTW, we face the problem that the note-wise annotations are no longer valid, since the OMR system may introduce errors to the extracted score. Thus, comparing this baseline to the proposed RL approach is difficult, as we require notehead alignments between score and the audio to compute our evaluation metrics. To avoid additional manual annotations of the note onset positions in the extracted OMR-MIDI, we evaluate this baseline by measuring the offset of the tracking position relative to a perfect alignment, disregarding local tempo deviations. In other words: the “perfect” alignment would correspond to the main diagonal of the cost matrix (which is no longer a square matrix due to the overall tempo/duration difference between OMR-MIDI and performance MIDI). The underlying assumption here is that no additional bars are inserted or removed in the OMR output, which we verify by a visual inspection of the extracted data. Given these warping paths, we can compute the measures introduced in Section 4.1, by projecting back onto the sheet images.
### 4.3. Experimental Results
Table 3 provides a summary of our experimental results. Our goal was, on the one hand, to reproduce the results by Dorfer et al. (2018b) and, on the other, to underpin their claim that improvements in the field of RL will eventually lead to improvements in the score following task.
Table 3
Comparison of score following approaches. MIDI-ODTW considers a perfectly extracted score MIDI file and aligns it to a performance with ODTW. OMR-ODTW does the same, but uses a score MIDI file extracted by an OMR system. MM-Loc is obtained by using the method presented by Dorfer et al. (2016) with a temporal context of 4 and 2 seconds for Nottingham and MSMD, respectively. For MSMD, we use the models from the references and re-evaluate them on the cleaned data set. For A2C, PPO and REINFORCEbl we report the average over 10 evaluation runs. The mean absolute tracking error and its standard deviation are given in centimeters.
Left block of columns: Nottingham (monophonic, 46 test pieces); right block: MSMD (polyphonic, 94 test pieces).

| Method | Rtue | Ron | mean abs. err. (cm) | std (cm) | Rtue | Ron | mean abs. err. (cm) | std (cm) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MIDI-ODTW (upper bound) | 1.00 | 1.00 | 0.00 | 0.00 | 1.00 | 1.00 | 0.00 | 0.00 |
| OMR-ODTW | 0.89 | 0.95 | 0.04 | 0.09 | 0.77 | 0.87 | 0.63 | 0.98 |
| MM-Loc (Dorfer et al., 2018b) | 0.65 | 0.83 | 0.08 | 0.28 | 0.55 | 0.60 | 0.29 | 1.07 |
| A2C (Dorfer et al., 2018b) | 0.96 | 0.99 | 0.08 | 0.12 | 0.76 | 0.77 | 0.69 | 0.81 |
| REINFORCEbl | 0.97 | 0.99 | 0.06 | 0.09 | 0.59 | 0.70 | 1.06 | 1.07 |
| A2C | 0.96 | 0.99 | 0.07 | 0.09 | 0.75 | 0.77 | 0.68 | 0.82 |
| PPO | 0.99 | 0.99 | 0.06 | 0.09 | 0.81 | 0.80 | 0.65 | 0.81 |
As described above, we considered two datasets of different complexity: monophonic music in the Nottingham data and polyphonic music in the MSMD data. Table 3 indicates that our adapted implementation and retrained A2C models deliver similar results for both datasets as reported by Dorfer et al. (2018b), with some improvement in alignment precision mainly due to variance in RL. However, the new PPO algorithm brings considerable additional improvement. This does not manifest itself on the Nottingham dataset, which is obviously too simple. On the polyphonic music provided in the MSMD dataset, however, the difference becomes evident: PPO outperforms A2C by 5 percentage points in terms of the number of pieces successfully tracked until the end (Rtue), and by about 3 points in terms of tracked onsets (Ron). Thus, PPO learns more robust tracking behavior, leading also to a slight improvement regarding accuracy.
Also, the results indicate that thanks to our generic formulation of the tracking task, advancements in RL research indeed directly translate into improvements in score following. This is particularly noticeable when we consider the performance of the “old” REINFORCEbl algorithm which, while learning a usable policy for the monophonic dataset, clearly drops behind on the polyphonic MSMD. The new variance reduction methods incorporated in the more advanced A2C and PPO algorithms exhibit their full power here.
While conducting these experiments we made a point of keeping the score-following MDP and the training process the same for both datasets. This is in contrast to Dorfer et al. (2018b), who used a different action space for the Nottingham dataset (Δνpxl = ±0.5 compared to Δνpxl = ±1.0). Here, we only adapt the underlying neural network architecture, making it less deep for the simpler Nottingham data. In this way, we want to provide additional evidence for the generality and robustness of our MDP formulation and the RL algorithms (especially A2C and PPO).
Comparing the RL agents to the supervised localization baseline (MM-Loc), we see that the baseline achieves a lower tracking error. However, it manages to track only 55% of the pieces to the end, compared to the 81% of PPO.
When we compare the results to the ODTW baselines, we observe two things. First, if we had a perfectly working OMR system (MIDI-ODTW), the task itself would become trivial. This method is a theoretical upper bound on the score-following performance and its main purpose is to verify our ODTW implementation. Second, and more interesting, we see that we do not (yet) have such a flawless OMR system. On the Nottingham dataset PPO surpasses OMR-ODTW on all measures except the average alignment error. While OMR-ODTW still outperforms the best RL method in terms of tracked onsets and average alignment error on MSMD, PPO manages to track more pieces and also has a lower standard deviation of the alignment errors. These results are promising and indicate that the RL approach is reasonable and even competitive to existing methods, with the additional advantage of directly working on raw sheet images, without the need of an additional OMR step.
We further note the real-time capabilities of our system. On average (estimated over 1000 trials), it takes approximately 2.04 ms for the agent to process a new incoming frame (corresponding to 50 ms of audio).7 This measure is independent of the piece length, as it is primarily determined by the duration of a forward path through the underlying neural network.
## 5. Taking a Look Inside
As with many neural network-based machine learning models, our agent is something of a black box (Krause et al., 2016). While the experiments show that it is able to track pieces from beginning to end, we do not know why and how an agent’s decision is formed. In the following section we try to gain some insight both into how the learned models organise the state space internally, and how the agents “look at” a state when making a decision.
### 5.1. A Look into Embedding Space: t-SNE
The network architecture given in Figure 5 is structured in such a way that at first the two modalities (score and audio) are processed separately. After several convolutional layers we arrive at a lower-dimensional feature representation for each modality which is flattened and processed by a dense layer. Both are then concatenated to a single feature vector and passed through another dense layer representing the embedding of the two modalities. We now take the output of this 512-dimensional embedding layer and apply t-SNE (t-distributed stochastic neighbor embedding) (van der Maaten and Hinton, 2008) to project the feature vector into a two-dimensional space. The idea behind t-SNE is to project high-dimensional vectors into a lower-dimensional space in such a way that samples that are similar/close to each other are also close to each other in the lower-dimensional space. Figure 7 provides a visualization of the embeddings for all states that our best agent (PPO) visited when evaluated on the MSMD test split.
Figure 7
Two-dimensional t-SNE projection of the 512-dimensional embeddings taken from the network’s concatenation layer (see Figure 5). Each point in the scatter plot corresponds to an audio–score input tuple. The color encodes the predicted value $\hat{v}(s; \mathbf{w})$. (Figure inspired by Mnih et al. (2015).)
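The projection itself can be obtained with scikit-learn; the sketch below assumes that the 512-dimensional embeddings and the corresponding value predictions have already been collected into arrays.

```python
import numpy as np
from sklearn.manifold import TSNE

def project_embeddings(embeddings, values):
    """Project the 512-d concatenation-layer embeddings to two dimensions;
    `values` are the corresponding state-value predictions, returned
    unchanged so they can be used to color the scatter plot."""
    embeddings = np.asarray(embeddings, dtype=np.float32)
    coords = TSNE(n_components=2, random_state=0).fit_transform(embeddings)
    return coords, np.asarray(values)
```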
Each point in the plot corresponds to an audio–score input tuple (state s) and is given a color according to the predicted value $\hat{v}(s; \mathbf{w})$. Remember that this value encodes the agent’s belief of how good a state is, i.e., how much return it can expect if it is currently in this state. After visually exploring the data, we emphasize three clusters. The first cluster (C1) in the upper left corner turns out to comprise the beginnings of pieces. The value of those states is in general high, which is intuitive as the agent can expect the cumulative future reward of the entire pieces. Note that the majority of the remaining embeddings have a similar value. The reason is that all our agents use a discounting factor γ < 1 which limits the temporal horizon of the tracking process. This introduces an asymptotic upper bound on the maximum achievable value.
The second cluster (C2) on the right end corresponds to the piece endings. Here we observe a low value as the agent has learned that it cannot accumulate more reward in such states due to the fact that the piece ends. The third cluster (C3) is less distinct. It contains mainly states with a clef somewhere around the middle of the score excerpt. (These are the result of our way of “flattening” our scores by concatenating single staff systems into one long unrolled score.) We observe mixed values that lie in the middle of the value spectrum. A reason for this might be that these situations are hard for the agent to track accurately, because it has to rapidly adapt its reading speed in order to perform a “jump” over the clef region. This can be tricky, and it can easily happen that the agent loses its target shortly before or after such a jump. Thus, the agent assigns a medium value to these states.
### 5.2. A Look into the Policy: Integrated Gradients
A second relevant question is: what exactly causes the agent to choose a particular action? Different approaches to answering this have recently been explored in the deep learning community (Baehrens et al., 2010; Shrikumar et al., 2017; Sundararajan et al., 2017). One of these is called integrated gradients (Sundararajan et al., 2017). The idea is to explain an agent’s decision by finding out which parts of the input were influential in the prediction of a certain action. This is done by accumulating the gradients of the prediction with respect to multiple scaled variations of the input, which the authors refer to as the path integral. The gradient with respect to the input points is the direction that maximizes the agent’s decision, i.e., the probability of choosing the action that currently has the highest probability. Thus, a high gradient for a certain input feature suggests a high impact on the decision.
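A minimal approximation of integrated gradients for a single input tensor is sketched below; in our multi-modal setting the same procedure is applied to each input modality separately, and the single-tensor model interface is a simplifying assumption.

```python
import torch

def integrated_gradients(model, state, target_action, steps=50):
    """Approximate integrated gradients for one input tensor: accumulate the
    gradients of the chosen action's probability along a straight path from
    an all-zero baseline to the actual input, then scale by the input."""
    baseline = torch.zeros_like(state)
    total_grad = torch.zeros_like(state)
    for alpha in torch.linspace(0.0, 1.0, steps):
        x = (baseline + alpha * (state - baseline)).requires_grad_(True)
        prob = model(x)[0, target_action]   # `state` carries a batch dimension
        grad, = torch.autograd.grad(prob, x)
        total_grad += grad
    return (state - baseline) * total_grad / steps
```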
In the following, we briefly look at two examples from our score following scenario.8 Figure 8a shows a situation where the agent is behind the true target position. This results in a high predicted probability for increasing the current pixel speed. In the contrary situation in Figure 8b, we observe the highest probability for decreasing the pixel speed. This makes intuitive sense, of course: when we are ahead of the target, the agent should probably slow down; when we are behind, it should speed up. Figure 9 shows the corresponding integrated gradients as a salience map overlaid on top of the state representation, for the situation depicted in Figure 8b. In the upper left part, the agent’s focus in the plain score is shown. We see that some of the note heads around the center (the current tracking position) are highlighted, which means that they have a strong impact on the agent’s decision. If we further consider the spectrogram, it seems that the agent also puts emphasis on the harmonics. Note also that some of the highlighted note heads match to the corresponding highlighted parts in the spectrogram.
Figure 8
Two examples of policy outputs. (a) Agent is behind the target, resulting in a high probability for increasing the pixel speed (π (+Δνpxl|s;θ) = 0.795). (b) Agent is ahead of the target, suggesting a reduction of pixel speed (π (–Δνpxl|s;θ) = 0.903).
Figure 9
Visualization of the agent’s focus on different parts of the input state for the situation shown in Figure 8b. The salience map was created via integrated gradients (Sundararajan et al., 2017), a technique to identify the most relevant input features for the agent’s decision—in this case, for decreasing its pixel speed.
A look at the delta score image (Figure 9, bottom left) reveals that the influential input parts are less sharply centered around the current tracking position than in the score’s salience map. It seems that the agent generally attends to the pixel deltas, which encode the reading speed, regardless of the current musical content. Recalling the definition of the Markov state (see Section 2.1), this is reassuring to see, as this part of the input state was intentionally designed for exactly this purpose.
Regarding the spectrogram delta, we are undecided whether it has beneficial effects on policy learning. As the spectrogram is provided by the environment (one new frame per time step), there is no additional information captured in the delta spectrogram. Nevertheless, it may be beneficial as an additional input feature that highlights note onsets in the audio signal. Preliminary experiments indicate that we might be able to remove this additional input without losing tracking accuracy.
## 6. Investigations on Real Performances
So far, all experiments were based on synthesized piano music. However, in real performances, the agent is confronted with additional challenges, including local and global tempo deviations, playing errors, or varying acoustic conditions (Widmer, 2017). In order to evaluate these challenges in a controlled way, we asked two pianists to perform 16 pieces (split between them) from the MSMD test dataset. For these experiments, we only selected pieces from the test set where the agent exhibited acceptable performance on the synthesized data. During the recording session, the pianists were asked to perform the pieces without going to extremes. Still, the performances include local tempo deviations, as well as additional ornaments like trills (especially in the Bach and Mozart pieces; please consult Table 6 in the Appendix for an overview of the recorded pieces).
The actual recordings took place at our institute in a regular office environment (room dimensions c. 5 × 6 meters). The instrument was a Yamaha AvantGrand N2 hybrid piano (A4 = 440 Hz). From the piano, we simultaneously recorded the MIDI signal (performance MIDI), the direct stereo output (dry), and the acoustic room signal by placing an omni-directional microphone about 2 meters away from the piano. The different signals can be related to different levels of difficulty, e.g., the room signal is harder to track than the dry signal since additional room reflections influenced the recording.
To establish a ground truth, we used the performance MIDI files, the corresponding score MIDI file, and a rendered version of the score, where the latter two were taken from the MSMD dataset (i.e., the score images were generated with Lilypond). Using the alignment technique described by Dorfer et al. (2018a), we established the needed connection between note head positions in the scores and the corresponding notes in the performance MIDI file and audio recordings, respectively. Note that this alignment is automatically generated and not perfect for all of the notes.
To compare the performance of our RL agents in the context of real performances, we conduct four experiments with increasing level of difficulty. First, the algorithms are evaluated on the original synthesized MIDI score, which provides an upper performance bound: we do not expect the agents to do better on the set of real performances. Second, we synthesize the MIDI data we got from the real recordings with the same piano synthesizer used during training. This experiment is meant to tell us how the agents cope with performance variations that are not necessarily encoded in the score, but keeping the audio conditions unchanged. For the third and fourth experiments, we use the audio from the direct output of the piano and the room microphone, respectively, instead of synthesizing it from MIDI. This gives us insight into how the agents are able to generalize to real world audio. These four experiments comprise different challenges, where intuitively the first is the easiest and the last one should be the hardest due to noisy recording conditions. We compare our agents to the same baseline approaches as in Section 4. The ground truth for the ODTW baselines is derived as described in Section 4.2, by using the given automatic alignment.
The results of these experiments are summarized in Table 4. In general, we observe that real performances introduce a larger mean error and standard deviation. As expected, we also see a performance decrease with increasing difficulty: best results are achieved on the original synthesized MIDI, followed by the synthesized performance MIDI and the direct out recording (with the exception of REINFORCEbl). For the room recordings we observe the weakest performance.
Table 4
Comparison of score following approaches on real performances. To get a more robust estimate of the performance of the RL agents (REINFORCEbl, A2C and PPO), we report the average over 50 evaluation runs. MM-Loc is the supervised baseline presented by Dorfer et al. (2016). MIDI-ODTW and OMR-ODTW are the ODTW baselines described in Section 4.2. The mean absolute tracking error and its standard deviation are given in centimeters.
| Method | Rtue | Ron | mean abs. err. (cm) | std (cm) |
| --- | --- | --- | --- | --- |
| Original MIDI Synthesized (Score = Performance) | | | | |
| MIDI-ODTW | 1.00 | 1.00 | 0.00 | 0.01 |
| OMR-ODTW | 0.62 | 0.80 | 0.85 | 1.12 |
| MM-Loc | 0.44 | 0.45 | 0.38 | 1.14 |
| REINFORCEbl | 0.56 | 0.59 | 1.15 | 1.14 |
| A2C | 0.70 | 0.63 | 0.65 | 0.82 |
| PPO | 0.74 | 0.68 | 0.70 | 0.87 |
| Performance MIDI Synthesized | | | | |
| MIDI-ODTW | 0.81 | 0.94 | 0.50 | 0.76 |
| OMR-ODTW | 0.50 | 0.72 | 0.90 | 1.08 |
| MM-Loc | 0.25 | 0.51 | 0.36 | 0.99 |
| REINFORCEbl | 0.14 | 0.31 | 1.80 | 1.48 |
| A2C | 0.58 | 0.51 | 0.94 | 0.94 |
| PPO | 0.56 | 0.50 | 0.94 | 1.01 |
| Direct Out | | | | |
| MIDI-ODTW | 0.88 | 0.92 | 0.59 | 0.79 |
| OMR-ODTW | 0.50 | 0.67 | 0.93 | 1.15 |
| MM-Loc | 0.19 | 0.32 | 0.55 | 1.42 |
| REINFORCEbl | 0.33 | 0.43 | 1.42 | 1.29 |
| A2C | 0.49 | 0.55 | 0.97 | 1.06 |
| PPO | 0.51 | 0.53 | 1.01 | 1.11 |
| Room Recording | | | | |
| MIDI-ODTW | 0.81 | 0.93 | 0.64 | 0.84 |
| OMR-ODTW | 0.50 | 0.65 | 0.93 | 1.09 |
| MM-Loc | 0.00 | 0.19 | 0.68 | 1.58 |
| REINFORCEbl | 0.08 | 0.37 | 1.52 | 1.34 |
| A2C | 0.38 | 0.50 | 1.11 | 1.12 |
| PPO | 0.30 | 0.43 | 1.26 | 1.24 |
REINFORCEbl is again outperformed by both A2C and PPO, but contrary to the results on synthetic data we now observe that the PPO agent performs worse than the A2C agent on the real performances. It might be the case that PPO overfitted to the training conditions and is thus not able to deal with performance variations as well as A2C. However, the problem of overfitting in RL is difficult to address and the object of ongoing research efforts (e.g., Cobbe et al., 2018). Thus, further experiments with a larger test set are necessary to conclude if this is really the case.
Comparing the RL agents to MM-Loc shows that the supervised baseline does not generalize as well and has likely overfitted to the training conditions. The performance of the ODTW baselines is at a similar level over the different experimental settings; however, we see a higher performance deterioration for the OMR baseline (OMR-ODTW) compared to the results in Section 4.3, where score and performance are created from the same MIDI file. As these methods seem to be more robust against different recording conditions (most likely due to the chroma features used to represent the audio), they still exceed the machine learning based approaches in almost all cases. For future work it will be necessary to improve the generalization capabilities of the proposed approach by making it more robust to different acoustic scenarios, e.g., through data augmentation.
## 7. Conclusion and Future Work
In this paper, we investigated the potential of deep RL for the task of online score following on raw sheet images. Using a more advanced learning algorithm than the one used by Dorfer et al. (2018b), we were able not only to reproduce the reported results but also to improve the tracking performance. Given that RL is currently one of the most actively researched areas in machine learning, we expect further advances that we think will directly transfer to score following.
Furthermore, we conducted first experiments involving real performances. While the initial results are promising, there is still a lot of room for improvement. Also, in contrast to most state-of-the-art methods for general music tracking, our method is currently restricted to piano music.
The RL framework as such can be adapted to a variety of different alignment scenarios, given appropriate data. In particular, the input modalities can be exchanged to handle audio–MIDI, audio–lyrics, or MIDI–MIDI alignment scenarios. The latter is of interest for piano accompaniment systems, where actions of the agent could involve triggering of events, e.g., playing an accompaniment in sync with a live performer.
Moreover, we are eager to see deep RL-based approaches in other MIR-related tasks, such as automatic music transcription. This is challenging, because coping with high-dimensional action spaces (e.g., in theory $2^{88}$ for piano music transcription) is still an open problem in RL research. Still, we think this is an attractive line of research to follow, because of the generality and conceptual simplicity of the reinforcement learning scenario: in principle, we only need to find an appropriate formulation of a task (including Markov state representation and, crucially, a way of generating a not-too-sparse reward signal), and we will immediately benefit from the power of RL with all the exciting developments currently going on in this research area.
## Notes
1. As in Sutton and Barto (2018), we denote random variables with capital letters such as state St and instances with small letters such as s.
2. The errors are caused by inconsistencies in the way “Da capo” is encoded in Lilypond.
7. Tested on a system with a consumer GPU (NVIDIA GEFORCE GTX 1080), 16GB RAM and an Intel i7-7700 CPU.
8. More examples and video renditions can be found on the paper’s accompanying website http://www.cp.jku.at/resources/2019_RLScoFo_TISMIR.
## Appendix
In Table 5 we provide a summary of all the hyperparameters used in the training process and for the RL algorithms. In Table 6 an overview of the pieces recorded for Section 6 is given.
Table 5
Hyperparameter overview.
| Hyperparameter | Value |
| --- | --- |
| Adam decay rates (β1, β2) | (0.9, 0.999) |
| Patience | 50 |
| Learning rate multiplier | 0.1 |
| Refinements | 2 |
| Time horizon tmax | 15 |
| Number of actors | 8 |
| Entropy regularization | 0.05 |
| Discount factor γ | 0.9 |
| GAE parameter λ | 0.95 |
| PPO clipping parameter ɛ | 0.2 |
| PPO epochs | 1 |
| PPO batch size | 120 |
Table 6
Overview of the pieces from the MSMD dataset that were recorded as real performances. The pieces are played without repetitions.
| Composer | Piece name | Dur. (sec.) |
| --- | --- | --- |
| Bach, Johann Sebastian | Polonaise in F major, BWV Anh. 117a | 47.32 |
| Bach, Johann Sebastian | Sinfonia in G minor, BWV 797 | 99.69 |
| Bach, Johann Sebastian | French Suite No. 6 in E major, Menuet, BWV 817 | 37.21 |
| Bach, Johann Sebastian | Partita in E minor, Allemande, BWV 830-2 | 86.73 |
| Bach, Johann Sebastian | Prelude in C major, BWV 924a | 40.43 |
| Bach, Johann Sebastian | Minuet in F major, BWV Anh. 113 | 40.49 |
| Bach, Johann Sebastian | Minuet in G major, BWV Anh. 116 | 51.56 |
| Bach, Johann Sebastian | Minuet in A minor, BWV Anh. 120 | 31.32 |
| Chopin, Frédéric François | Nocturne in B♭ minor, Op. 9, No. 1 | 328.92 |
| Mozart, Wolfgang Amadeus | Piano Sonata No. 11 in A major, 1st Movt, Variation 1, KV331 | 56.33 |
| Mussorgsky, Modest Petrovich | Pictures at an Exhibition, Promenade III | 27.17 |
| Schumann, Robert | Album für die Jugend, Op. 68, 1. Melodie | 45.50 |
| Schumann, Robert | Album für die Jugend, Op. 68, 6. Armes Waisenkind | 73.52 |
| Schumann, Robert | Album für die Jugend, Op. 68, 8. Wilder Reiter | 24.88 |
| Schumann, Robert | Album für die Jugend, Op. 68, 16. Erster Verlust | 55.83 |
| Schumann, Robert | Album für die Jugend, Op. 68, 26. Untitled | 74.40 |
## Reproducibility
The data and code for reproducing our results, along with detailed instructions and further examples, are available online: https://github.com/CPJKU/score_following_game and on the accompanying website http://www.cp.jku.at/resources/2019_RLScoFo_TISMIR.
## Acknowledgements
This project received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement number 670035, project “Con Espressione”). The authors would like to thank Jan Hajič jr and Carlos Eduardo Cancino Chacón—both wonderful colleagues and fabulous piano players—for helping to record the piano performances used in Section 6. Many thanks to the anonymous reviewers and the editors for very helpful (and also a bit challenging) comments and suggestions which helped to improve this manuscript.
## Competing Interests
The authors have no competing interests to declare.
## References
1. Arzt, A. (2016). Flexible and Robust Music Tracking. PhD thesis, Johannes Kepler University Linz.
2. Arzt, A., Frostel, H., Gadermaier, T., Gasser, M., Grachten, M., & Widmer, G. (2015). Artificial Intelligence in the Concertgebouw. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 2424–2430). Buenos Aires, Argentina.
3. Arzt, A., Widmer, G., & Dixon, S. (2008). Automatic Page Turning for Musicians via Real-Time Machine Listening. In Proceedings of the European Conference on Artificial Intelligence (ECAI) (pp. 241–245). Patras, Greece.
4. Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., & Müller, K.-R. (2010). How to Explain Individual Classification Decisions. Journal of Machine Learning Research, 11, 1803–1831.
5. Balke, S., Achankunju, S. P., & Müller, M. (2015). Matching Musical Themes Based on Noisy OCR and OMR Input. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (pp. 703–707). Brisbane, Australia. DOI: https://doi.org/10.1109/ICASSP.2015.7178060
6. Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
7. Boulanger-Lewandowski, N., Bengio, Y., & Vincent, P. (2012). Modeling Temporal Dependencies in High-dimensional Sequences: Application to Polyphonic Music Generation and Transcription. In Proceedings of the 29th International Conference on Machine Learning (ICML). Edinburgh, UK. DOI: https://doi.org/10.1109/ICASSP.2013.6638244
8. Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., & Zaremba, W. (2016). OpenAI Gym. arXiv preprint arXiv:1606.01540.
9. Byrd, D., & Simonsen, J. G. (2015). Towards a Standard Testbed for Optical Music Recognition: Definitions, Metrics, and Page Images. Journal of New Music Research, 44(3), 169–195. DOI: https://doi.org/10.1080/09298215.2015.1045424
10. Clevert, D., Unterthiner, T., & Hochreiter, S. (2016). Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). In Proceedings of the International Conference on Learning Representations (ICLR) (arXiv:1511.07289).
11. Cobbe, K., Klimov, O., Hesse, C., Kim, T., & Schulman, J. (2018). Quantifying Generalization in Reinforcement Learning. arXiv preprint arXiv:1812.02341.
12. Cont, A. (2006). Realtime Audio to Score Alignment for Polyphonic Music Instruments using Sparse Non-Negative Constraints and Hierarchical HMMs. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (vol. 5, pp. 245–248). Toulouse, France. DOI: https://doi.org/10.1109/ICASSP.2006.1661258
13. Cont, A. (2010). A Coupled Duration-Focused Architecture for Real-Time Music-to-Score Alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(6), 974–987. DOI: https://doi.org/10.1109/TPAMI.2009.106
14. Dixon, S. (2005). An On-Line Time Warping Algorithm for Tracking Musical Performances. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 1727–1728). Edinburgh, UK.
15. Dixon, S., & Widmer, G. (2005). MATCH: A music alignment tool chest. In Proceedings of the International Conference on Music Information Retrieval (ISMIR) (pp. 492–497). London, UK.
16. Dorfer, M., Arzt, A., & Widmer, G. (2016). Towards Score Following in Sheet Music Images. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 789–795). New York, USA.
17. Dorfer, M., Hajič, J., Jr., Arzt, A., Frostel, H., & Widmer, G. (2018a). Learning Audio–Sheet Music Correspondences for Cross-Modal Retrieval and Piece Identification. Transactions of the International Society for Music Information Retrieval, 1(1), 22–33. DOI: https://doi.org/10.5334/timsir.12
18. Dorfer, M., Henkel, F., & Widmer, G. (2018b). Learning to Listen, Read, and Follow: Score Following as a Reinforcement Learning Game. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 784–791). Paris, France.
19. Duan, Y., Chen, X., Houthooft, R., Schulman, J., & Abbeel, P. (2016). Benchmarking Deep Reinforcement Learning for Continuous Control. In Proceedings of the 33nd International Conference on Machine Learning (ICML) (pp. 1329–1338). New York City, United States.
20. Greensmith, E., Bartlett, P. L., & Baxter, J. (2004). Variance Reduction Techniques for Gradient Estimates in Reinforcement Learning. Journal of Machine Learning Research, 5, 1471–1530.
21. Hajič, J., Jr. and Pecina, P. (2017). The MUSCIMA++ Dataset for Handwritten Optical Music Recognition. In 14th International Conference on Document Analysis and Recognition (ICDAR) (pp. 39–46). New York, United States. DOI: https://doi.org/10.1109/ICDAR.2017.16
22. Kingma, D., & Ba, J. (2015). Adam: A Method for Stochastic Optimization. International Conference on Learning Representations (ICLR) (arXiv:1412.6980).
23. Krause, J., Perer, A., & Ng, K. (2016). Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 5686–5697). ACM. DOI: https://doi.org/10.1145/2858036.2858529
24. Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., & Wierstra, D. (2015). Continuous Control with Deep Reinforcement Learning. arXiv preprint arXiv:1509.02971.
25. Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T. P., Harley, T., Silver, D., & Kavukcuoglu, K. (2016). Asynchronous Methods for Deep Reinforcement Learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML) (pp. 1928–1937). New York City, United States.
26. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., & Hassabis, D. (2015). Human-level Control Through Deep Reinforcement Learning. Nature, 518, 529–533. DOI: https://doi.org/10.1038/nature14236
27. Müller, M. (2015). Fundamentals of Music Processing. Springer Verlag. DOI: https://doi.org/10.1007/978-3-319-21945-5
28. Nakamura, E., Cuvillier, P., Cont, A., Ono, N., & Sagayama, S. (2015). Autoregressive Hidden Semi-Markov Model of Symbolic Music for Score Following. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 392–398). Málaga, Spain.
29. Orio, N., Lemouton, S., & Schwarz, D. (2003). Score Following: State of the Art and New Developments. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME) (pp. 36–41). Montreal, Canada.
30. Prockup, M., Grunberg, D., Hrybyk, A., & Kim, Y. E. (2013). Orchestral Performance Companion: Using Real-Time Audio to Score Alignment. IEEE Multimedia, 20(2), 52–60. DOI: https://doi.org/10.1109/MMUL.2013.26
31. Raphael, C. (2010). Music Plus One and Machine Learning. In Proceedings of the International Conference on Machine Learning (ICML) (pp. 21–28).
32. Schulman, J., Moritz, P., Levine, S., Jordan, M., & Abbeel, P. (2015). High-dimensional Continuous Control Using Generalized Advantage Estimation. arXiv preprint arXiv:1506.02438.
33. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal Policy Optimization Algorithms. arXiv preprint arXiv:1707.06347.
34. Schwarz, D., Orio, N., & Schnell, N. (2004). Robust Polyphonic MIDI Score Following with Hidden Markov Models. In International Computer Music Conference (ICMC). Miami, Florida, USA.
35. Shrikumar, A., Greenside, P., & Kundaje, A. (2017). Learning Important Features through Propagating Activation Differences. arXiv preprint arXiv:1704.02685.
36. Sundararajan, M., Taly, A., & Yan, Q. (2017). Axiomatic Attribution for Deep Networks. arXiv preprint arXiv:1703.01365.
37. Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning. MIT Press, 2nd edition.
38. Thomas, V., Fremerey, C., Müller, M., & Clausen, M. (2012). Linking Sheet Music and Audio – Challenges and New Approaches. In M. Müller, M. Goto, & M. Schedl (Eds.), Multimodal Music Processing, volume 3 of Dagstuhl Follow-Ups (pp. 1–22). Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
39. van der Maaten, L., & Hinton, G. (2008). Visualizing Data Using t-SNE. Journal of Machine Learning Research, 9, 2579–2605.
40. Wang, J. X., Kurth-Nelson, Z., Tirumala, D., Soyer, H., Leibo, J. Z., Munos, R., Blundell, C., Kumaran, D., & Botvinick, M. (2016). Learning to Reinforcement Learn. arXiv preprint arXiv:1611.05763.
41. Widmer, G. (2017). Getting Closer to the Essence of Music: The Con Espressione Manifesto. ACM Transactions on Intelligent Systems and Technology (TIST), 8(2), 19. DOI: https://doi.org/10.1145/2899004
42. Williams, R. J. (1992). Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8, 229–256. DOI: https://doi.org/10.1007/BF00992696
43. Williams, R. J., & Peng, J. (1991). Function Optimization Using Connectionist Reinforcement Learning Algorithms. Connection Science, 3(3), 241–268. DOI: https://doi.org/10.1080/09540099108946587
44. Wu, Y., Mansimov, E., Liao, S., Grosse, R. B., & Ba, J. (2017). Scalable Trust-region Method for Deep Reinforcement Learning Using Kronecker-factored Approximation. CoRR, abs/1708.05144. | {} |
## Overview
In this post, we are going to look at the following neural network models: MobileNet v1[1] & v2[2], SqueezeNet[3], ShuffleNet v1[4] & v2[5], NasNet[6]. We consider the following questions:
1. What in the world do they look like?
2. Why are they fast? Why are they small? Which one is better and Why?
3. Why the authors design them like that?
So, let’s try to solve these doubts step by step.
## MobileNet v1 vs. Standard CNN models
MobileNet v1 is smart enough to decompose the standard convolution operation into two separate operations: depth-wise (or channel-wise) convolution and point-wise convolution.
We can take the following figure as an illustration:
Suppose we have the convolutional layer with kernel size $K$, input size $C_{in}\times H\times W$ and output size $C_{out} \times H \times W$ (stride=1). For a standard convolution operation, the computation complexity, here we use MACC (Multiply-accumulate, also known as MADD), is calculated as (for how to calculate FLOPs or MACC, we kindly recommend this great post: How Fast is my model?):
$$K\times K\times C_{in}\times C_{out}\times H\times W. \label{eq1}$$
With this decomposition, the two separate operations produce output feature maps of exactly the same size as the standard counterpart, but at a much lower computation cost. How does that work?
OK, the depth-wise convolution takes a single channel as input and outputs a single channel for each channel of the input volume, and then concatenates the output channels for the second stage, in which the point-wise convolution takes place. Accordingly, its computation cost is:
$$K\times K\times H\times W\times C_{in}.$$
The point-wise convolution is a simple 1x1 convolution (also known as network-in-network), which transforms the $C_{in}\times H\times W$ volume produced by the depth-wise operation into a $C_{out}\times H\times W$ output volume. Since we first dealt with the input volume channel by channel, the purpose of the point-wise operation is to combine information from different channels and fuse it into new features. The point-wise operation costs
$$1\times 1\times C_{in}\times C_{out}\times H\times W = C_{in}\times C_{out}\times H\times W.$$
As a result, with the above decomposition, the total MACC is
$$K\times K\times H\times W\times C_{in} + C_{in}\times C_{out}\times H\times W. \label{eq2}$$
Compared with equation $\eqref{eq1}$, the reduction of computation is $\eqref{eq2}$/$\eqref{eq1}$ $=\frac{1}{C_{out}} + \frac{1}{K^2}$.
In addition, the number of parameters of the standard convolution filters is $K\times K\times C_{in}\times C_{out}$. With depth-wise and point-wise convolution, the number of parameters becomes $K\times K\times C_{in} + C_{in}\times C_{out} = C_{in}\times (K\times K + C_{out})$. In this way, both the computation cost and the model size can be considerably reduced. What's more, this can be pushed further by applying the Resolution Multiplier and Width Multiplier, which reduce the resolution of the input images and the number of channels of all layers by a multiplier coefficient.
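To make the decomposition concrete, here is a minimal PyTorch sketch of a depth-wise separable convolution together with a quick MACC comparison. This is my own illustration rather than code from the MobileNet paper or the repository linked below; the module name and the `macc()` helper are assumptions for this example.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A KxK depth-wise convolution followed by a 1x1 point-wise convolution."""
    def __init__(self, c_in, c_out, kernel_size=3, stride=1):
        super().__init__()
        # groups=c_in: each filter sees exactly one input channel (depth-wise stage).
        self.depthwise = nn.Conv2d(c_in, c_in, kernel_size, stride=stride,
                                   padding=kernel_size // 2, groups=c_in, bias=False)
        # The 1x1 convolution mixes information across channels (point-wise stage).
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def macc(c_in, c_out, h, w, k):
    """MACCs of a standard KxK convolution vs. its depth-wise separable version (stride 1)."""
    standard = k * k * c_in * c_out * h * w
    separable = k * k * c_in * h * w + c_in * c_out * h * w
    return standard, separable

std, sep = macc(c_in=128, c_out=256, h=56, w=56, k=3)
print(sep / std, 1 / 256 + 1 / 9)  # both print ~0.115: the ratio 1/C_out + 1/K^2
```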
If this is still unclear, the following is the whole MobileNet v1 structure with all the bells and whistles.
The structure was drawn according to the code in https://github.com/marvis/pytorch-mobilenet, where the filter in each row of the table takes the input whose size is written in the same row and outputs a volume whose size is written in the following row, which is then processed by the next filter. Finally, BR means Batch normalization and ReLU layers after a given filter.
What surprised me was that there is no residual module at all; what would happen if we added some residuals or shortcuts as in ResNet? After all, the authors achieved their goal: the accuracy on the ImageNet classification task is comparable to the counterpart using standard convolution filters, as well as to other famous CNN models.
## MobileNet v1 vs. SqueezeNet
First, let’s compare these two networks directly,
where 0.50 MobileNet-160 means halving the channels of all layers and setting the resolution of the input images to $160\times 160$. We can see from the table that the only highlight of SqueezeNet is its model size. We cannot ignore that we also need computation speed when we embed a model into resource-restricted devices like mobile phones. It's hard to say that SqueezeNet is good enough when we see that its MACC count is even higher than AlexNet's, by a large margin.
However, it’s worth thinking why SqueezeNet has so few parameters. Take a look at it basic unit (a fire module):
The basic idea behind SqueezeNet comes from three principles. First, using 1x1 filters as possible as we can; Second, decreasing the number of input channels to 3x3 filters. The last pinciple is to downsample feature maps after the merging operation of residual blocks so that to keep more activations.
By stacking fire modules, we get a small model, but one that still performs numerous computations.
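The fire module itself is only a few lines. Below is a rough PyTorch sketch based on the description above; the channel sizes in the example are illustrative assumptions rather than a claim about the exact SqueezeNet configuration.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Squeeze with 1x1 convs, then expand with a mix of 1x1 and 3x3 convs."""
    def __init__(self, c_in, squeeze, expand1x1, expand3x3):
        super().__init__()
        # Squeeze layer: only a few channels are fed to the expensive 3x3 filters (principle 2).
        self.squeeze = nn.Conv2d(c_in, squeeze, kernel_size=1)
        # Expand layers: mostly cheap 1x1 filters (principle 1), plus some 3x3 filters.
        self.expand1x1 = nn.Conv2d(squeeze, expand1x1, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze, expand3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Example: 96 input channels squeezed to 16, then expanded to 64 + 64 = 128 channels.
y = Fire(96, 16, 64, 64)(torch.randn(1, 96, 56, 56))
print(y.shape)  # torch.Size([1, 128, 56, 56])
```

The parameter count stays small because the 3x3 filters only ever see the squeezed channels, while the feature maps keep a relatively high resolution for much of the network, which helps explain why the MACC count remains high.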
## MobileNet v1 vs. MobileNet v2
Keep in mind that MobileNet v1's success is attributable to the depth-wise and point-wise convolutions. These two kinds of filters became the basic tools for most of the following works on network compression and speed-up, including MobileNet v2, ShuffleNet v1 and v2.
For MobileNet v2, similar to the above illustration, let's first take a look at its whole structure. For analysis, we take part of it, as the whole structure is a stack of similar components.
In this illustration, a green unit denotes a residual block, while an orange one denotes a normal block (without residual) with stride 2 that performs downsampling. The main characteristic of the MobileNet v2 architecture is that every unit or block first expands the number of channels by point-wise convolutions, then applies depth-wise convolutions with kernel size 3x3 on the expanded space, and finally projects back to a low-channel feature space using point-wise convolutions again. For a block that does not have to downsample its input volume, an additional residual connection is applied to enhance performance. Another feature, indicated in the figure above by a single B after each block (meaning Batch normalization only), is that no non-linearity is used at the output of the blocks. Now, I have the following questions:
1. When building a residual block, why connect the shortcut between two low-channel ends? Why not connect the “fat part” just like the original ResNet does?
2. Why does it need to be “fat” in the middle of the block? Why not just keep it slim so as to further reduce its size and parameter count? Why not apply ReLU at the end of the block?
3. Compared with ResNet, which applies ReLU on the “slim part” of each block, it seems like the two design strategies (ResNet block and MobileNet v2 block) conflict with each other. Why?
OK, let's try to answer these questions (if you have a different idea, please do not hesitate to contact me; my email can be found in my profile).
For question 1, there is an intuition behind the design of MobileNet v2: the bottlenecks actually contain all the necessary information, so connecting them does not cause information loss. On the other hand, connecting the “fat parts” is possible, but that also means connecting two volumes produced by two depth-wise convolutions, which sounds strange because we usually connect the outputs of normal convolutions (here a point-wise convolution is a normal 1x1 convolution); still, nothing stops us from trying.
For question 2, we can find our answer from the analysis of ReLU.
ReLU causes information collapse. However, the higher the dimension of the input, the less the information collapses. So the high dimension in the middle of the block is there to avoid information loss. Intuitively, more channels usually mean more powerful representative features, which enhances the discriminability of the model. Accordingly, it is reasonable not to apply ReLU at the “slim output” of the block.
We can use the same argument to challenge ResNet, which indeed uses ReLU on low-dimensional features. So why is it still so effective? This can be attributed to the high dimensionality of the input and output ends of a ResNet block, which preserves its representative ability even with the ReLU layer in the bottleneck.
The design art of MobileNet v2 is to keep a small number of channels at the input and output of each block, while doing the more complicated feature extraction inside the block with enough channels. This ensures the extraction of effective, high-level image features while reducing the computation cost, because the main computation cost comes from the 1x1 convolution filters (see the following figure).
MobileNet v2 has even fewer parameters and MACCs than v1. This is because MobileNet v1 feeds more channels to its 1x1 convolutions than v2, leading to many more MACCs, while MobileNet v2 smartly avoids giving many channels to the 1x1 convolutions and does feature extraction mainly via depth-wise convolutions.
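Putting these design decisions together, a v2-style block looks roughly like the sketch below. This is my own minimal PyTorch illustration; the expansion factor of 6 and the use of ReLU6 follow the common MobileNet v2 convention, but treat the exact details as assumptions.

```python
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Expand (1x1) -> depth-wise 3x3 -> linear project (1x1); residual only when shapes match."""
    def __init__(self, c_in, c_out, stride=1, expand_ratio=6):
        super().__init__()
        hidden = c_in * expand_ratio
        self.use_residual = (stride == 1 and c_in == c_out)
        self.block = nn.Sequential(
            # point-wise expansion to the "fat" space
            nn.Conv2d(c_in, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            # depth-wise 3x3 on the expanded space
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1,
                      groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            # linear projection back to the "slim" space: BN only, no ReLU
            nn.Conv2d(hidden, c_out, 1, bias=False),
            nn.BatchNorm2d(c_out),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out
```

Note how the two ideas discussed above show up directly in the code: the shortcut connects the two slim ends, and the final 1x1 projection is left linear.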
## MobileNet v2 vs. ShuffleNet v1 vs. NasNet
The above figure shows that ShuffleNet v1 (1.5) and MobileNet v2 have similar model sizes (3.4M params) and computation costs ($\approx 300$M MACCs), and furthermore similar classification accuracy. This means that ShuffleNet v1 is at the same level as MobileNet v2; the two are closely comparable. So, what does ShuffleNet v1 look like? Click here
Again, we capture part of it to analyse.
We have seen that the main computation takes place in the 1x1 convolutions, which also account for the main part of the parameters. Unlike MobileNet v2, which solves the problem by reducing the number of channels fed into the 1x1 convolutions, ShuffleNet v1 is more straightforward. Specifically, rather than only applying group convolution to the 3x3 filters (for group convolution, see ResNeXt; depth-wise convolution can be regarded as an extreme case of group convolution), it also applies the group operation to the 1x1 filters. Although this effectively reduces the computation cost and the number of parameters, it leads to a problem: different groups cannot communicate with each other, which restricts the power of the model.
The shuffle in ShuffleNet v1 solves this problem by shuffling all the output channels of the 1x1 group convolutions as a whole, thereby enforcing information exchange among groups. The most inspiring part is that the shuffle operation requires no additional parameters and is computationally cheap.
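The shuffle itself is just a reshape, a transpose and another reshape. Here is a short sketch (my own illustration, not code from the ShuffleNet authors) showing how channels from different groups get interleaved:

```python
import torch

def channel_shuffle(x, groups):
    """Interleave channels across groups: view (N, g*c, H, W) as (N, g, c, H, W), transpose, flatten."""
    n, channels, h, w = x.shape
    x = x.view(n, groups, channels // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, channels, h, w)

x = torch.arange(6.0).view(1, 6, 1, 1)                   # channels 0..5 in two groups of three
print(channel_shuffle(x, groups=2).flatten().tolist())   # [0.0, 3.0, 1.0, 4.0, 2.0, 5.0]
```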
To further reduce model size and computation cost, ShuffleNet v1 also uses BottleNecks as illustrated:
As discussed above, MobileNet v2 and ShuffleNet v1 both focus on reducing the computation cost of the 1x1 convolutions, but there are still three more differences between their structures.
1. Difference in how the residual is applied. For MobileNet v2, no residual is used when the shapes of the input and output volumes of a block do not match. For ShuffleNet v1, when the two do not match, an AveragePool + Concatenation strategy is used for the shortcut connection.
2. According to the above diagram, ShuffleNet v1 quickly downsamples the input image from 224x224 to 56x56, while MobileNet v2 only downsamples its input image to 112x112 in the first stages.
3. According to the logic of MobileNet v2, ReLU layers should be applied to “fat layers” rather than bottleneck layers, while ShuffleNet (both v1 and v2) more or less does the opposite (e.g., ReLU after the Compress operator, marked red in the figure). Why?
Well, I think it's worth trying to see what happens if we take away the ReLU after the 3x3 convolutions in MobileNet v1 or MobileNet v2 (e.g., only attach the ReLU to the first 1x1 convolution layer of each block in MobileNet v2). On the other hand, the reason why ShuffleNet v1 does not attach a ReLU after the 3x3 convolution layers comes from the explanation in Xception, which argued that for shallow features (i.e., the 1-channel-deep feature spaces of depth-wise convolutions), non-linearity becomes harmful, possibly due to a loss of information.
NasNet, where “Nas” is an abbreviation of Network architecture search, is definitely a more advanced technology for searching for compact and efficient networks. The auto-search algorithms and other very recent research works (from ICLR 2019, ICML 2019 and CVPR 2019) will be covered in another post. Let's proceed to ShuffleNet v2.
## ShuffleNet v2 vs. All
The above methods are based on two criteria: small model size and low computation cost. However, in practical applications, effort spent on these criteria does not necessarily yield a correspondingly faster model on real hardware. There are other factors we should take into account when designing an embeddable model for hardware devices, such as memory access cost (MAC) and battery consumption.
Based on the above findings, ShuffleNet v2 rethinks the previous compression models and proposes four useful design guidelines.
G1, Equal channel width minimizes MAC (this means letting the number of input channels equal that of output channels);
G2, Excessive group convolution increases MAC (do not use group convolutions, or use fewer of them);
G3, Network fragmentation reduces the degree of parallelism (small stacked convolutions within blocks, and branches in NasNet);
G4, Element-wise operations are non-negligible (like ReLU and the addition operations in a residual block).
As described in the original paper, ShuffleNet v1 violates G2 (group convolutions) and G1 (bottleneck blocks), MobileNet v2 violates G1 (inverted bottleneck structure) and G4 (ReLU on “thick” feature maps), and NasNet violates G3 (too many branches).
So the problem is:
How can we maintain a large number of equally wide channels with neither dense convolution nor too many groups?
Note that all the above guidelines have been verified by a series of validation experiments. Let's draw the building blocks of ShuffleNet v2 here (actually, I've also drawn a table for the ShuffleNet v2 structure here, but it takes time to understand…)
How does it solve the problem?
• First, the channel split divides the input channels into two parts; one of them is kept untouched, and the other goes through a 1x1 + DW3x3 + 1x1 pipeline in which the 1x1 convolutions do not use group convolution. On one hand this follows G2; on the other hand, the two branches themselves act as two groups. (A minimal sketch of the resulting unit follows this list.)
• Second, the two branches are merged by concatenation. By doing so, there are no add operations (which follows G4), and the ReLU and depth-wise convolutions only operate on half of the input channels, which again follows G4.
• Then, after concatenation, channel shuffling is applied to enforce communication between the branches. In addition, the Concat + Shuffle + Split pipeline can be merged into a single element-wise operation, which also follows G4.
• Similar to DenseNet, it takes advantage of feature reuse.
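Here is the promised sketch of a stride-1 ShuffleNet v2 unit, reusing the channel_shuffle function from the earlier sketch. Again, this is my own minimal illustration of the description in the bullets above, not the authors' reference code.

```python
import torch
import torch.nn as nn

class ShuffleV2Unit(nn.Module):
    """Stride-1 unit: split channels, transform one half, concatenate, then shuffle."""
    def __init__(self, channels):
        super().__init__()
        c = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(c, c, 1, bias=False), nn.BatchNorm2d(c), nn.ReLU(inplace=True),
            nn.Conv2d(c, c, 3, padding=1, groups=c, bias=False), nn.BatchNorm2d(c),
            nn.Conv2d(c, c, 1, bias=False), nn.BatchNorm2d(c), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        shortcut, transformed = x.chunk(2, dim=1)                      # channel split
        out = torch.cat([shortcut, self.branch(transformed)], dim=1)   # concat, no add ops (G4)
        return channel_shuffle(out, groups=2)                          # enforce branch communication
```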
Under the same FLOPs, ShuffleNet v2 is superior to the other models.
## Conclusion
We have analysed several classical network compression models, from which we can see that the main strategy to reduce model size and computation cost is using Depth-wise convolution, Group convolution and Point-wise convolution.
There are other interesting algorithms like network pruning, network quantization (e.g., binarizing weights and activations) and Network architecture search. They also lead to fast and small network models and will be discussed in the next post.
Note: Most of the figures are directly copied from the original paper.
• This post is under the license BY-NC-SA.
mersenneforum.org Msieve with MPI block Lanczos
2010-06-21, 09:34 #2 Brian Gladman May 2008 Worcester, United Kingdom 2·263 Posts
Hi Jason, Does this have any implications for building msieve on Windows that I need to do anything about? Brian
2010-06-21, 11:24 #3 jasonp Tribal Bullet Oct 2004 33·131 Posts
When the code moves into the trunk, using MPI will require defining HAVE_MPI somewhere, but no changes to files. You'll also need to locate the include files and libraries to some kind of MPI distribution, but there are many such and more than one runs on windows. I have no idea what to do about that, short of specializing on one library.
2010-06-22, 05:04 #4 frmky Jul 2003 So Cal 7×13×23 Posts
Here are some observations. I used a 17.3M matrix for these timings, and the times are ETA hours as reported by the client. The timing was done using up to eight nodes of our small cluster. Each node contains a single quad-core Core 2 processor and DDR2 memory, and the nodes are connected by gigabit ethernet. The time required depends strongly on both the number of MPI processes running and how those processes are distributed on the cluster. I allocated 1 MPI process per node, 2 MPI processes per node, or 4 MPI processes per node. For the 2 processes/node case, I ordered them by slot (0 and 1 on the same node, 2 and 3 on the same node, etc.) or by node (for eight processes, 0 and 4 on the same node, 1 and 5, 2 and 6, and 3 and 7). The threaded version of msieve with 4 threads reported an ETA of 1166 hours.
Code:
Processes   1 process/node   2 proc/node by slot   2 proc/node by node   4 proc/node
 4          807              968                   1209                  1252
 8          701              781                   1002                  938
MPI adds a bit of overhead compared to the threaded version when running on a single node (1252 hours for MPI vs. 1166 hours for the threaded version). Splitting these across nodes decreases the runtime. Interestingly, for the 2 proc/node case, how the processes are ordered makes a BIG difference. Adjacent pieces of the matrix apparently should be on the same node. And fastest of all is spreading the processes out so that there's only one on each node. I'm building a matrix now for 16 processes to get timings for the 2 proc/node and 4 proc/node cases.
2010-06-22, 07:17 #5 frmky Jul 2003 So Cal 82D16 Posts
Here's the table with the 16 process line included:
Code:
Processes   1 process/node   2 proc/node by slot   2 proc/node by node   4 proc/node
 4          807              968                   1209                  1252
 8          701              781                   1002                  938
16                           654                   672                   774
2010-06-22, 13:50 #6 jasonp Tribal Bullet Oct 2004 DD116 Posts
Great to see that we actually get a speedup! Some more observations: I've modified the matrix building code to write a 'fat index file' to pick the split points in the matrix. Now you can restart an MPI run with a different number of processes without having to rebuild the matrix. You can also build the initial matrix with only one process and then restart with the whole cluster, which should avoid wasting the whole cluster's time while the build takes place on one node, assuming that node has enough memory for the whole matrix. A problem I noticed on my local machine is that mpirun apparently catches signals, so that executing mpirun and hitting Ctrl-C does not make the LA write a checkpoint. Sending SIGINT directly to the msieve 'head node' process (i.e. the one whose working set is largest :) does seem to work however.
2010-06-22, 15:11 #7
R.D. Silverman
Nov 2003
22·5·373 Posts
Windows
Quote:
Originally Posted by jasonp Great to see that we actually get a speedup! Some more observations: .
I have an observation about Windows.
I have my own solver that reads matrices and writes output using CWI
formats. However, it is slower than the CWI solver (I really need to find
the time to optimize it), so I use the CWI solver.
I am currently doing the matrix for 2,1191+. This matrix has 7.9 million
columns and the matrix weight is 521.9 million. The matrix occupies
However, this matrix is going to take 830 hours to solve. This is
astonishingly slow. I have solved just slightly smaller
matrices, but they fit in 2 Gbytes.
It seems that crossing the 2GB threshold caused something to happen
that slows the code down. I am running on a Core-2 Duo laptop with
4GB DDR2 under Windows XP-64. The clock rate is 2.4GHz.
This could be caused by something architectural in my laptop. --> i.e.
it now needs to physically access 2 DIMMs to run the code, it could be
something in the compiler (To run with more than 2GB in a single process
one must turn the LARGEADDRESSAWARE flag on), or it could be something
in the way Windows XP-64 manages large processes.
The process is not paging. It is fully memory resident.
Paul Leyland is also using the same CWI solver. He reports much faster
code, but he is running under Linux. His machine may also be faster.
But 830 hours for a 7.9M row matrix seems much much too long a time.
Does anyone have any ideas/experience as to what might be causing
the slowdown?
Last fiddled with by R.D. Silverman on 2010-06-22 at 15:13 Reason: typo
2010-06-22, 15:43 #8
joral
Mar 2008
5×11 Posts
Quote:
Originally Posted by R.D. Silverman This could be caused by something architectural in my laptop. --> i.e. it now needs to physically access 2 DIMMs to run the code, it could be something in the compiler (To run with more than 2GB in a single process one must turn the LARGEADDRESSAWARE flag on), or it could be something in the way Windows XP-64 manages large processes.
In general, I have never seen differences in performance caused by accessing multiple DIMMs.
Is your code compiled as a 32-bit application? If so, there could be some performance hit with in-memory swapping. (As I recall, normally Windows gives 2 GB for user accessible memory, and then 2 GB is reserved for windows when running 32-bit on windows 64) It's been a little while since I've done any windows programming though, so I could be off.
2010-06-22, 16:12 #9
TheJudger
"Oliver"
Mar 2005
Germany
11×101 Posts
Hi Jason,
Quote:
Originally Posted by jasonp A problem I noticed on my local machine is that mpirun apparently catches signals, so that executing mpirun and hitting Ctrl-C does not make the LA write a checkpoint. Sending SIGINT directly to the msieve 'head node' process (i.e. the one whose working set is largest :) does seem to work however.
which MPI implementation are you using?
Depending on your code sending a SIGINT to the 'head node' process (do you mean 'MPI rank 0'?) might result in an unclean termination of your parallel job (usually this is not a problem, sometimes you'll have some unterminated ranks left, depending on MPI implementation. E.g. mpich often doesn't "detect" when one rank dies and doesn't kill the other ranks...).
Oliver
2010-06-22, 16:23 #10
R.D. Silverman
Nov 2003
22×5×373 Posts
Quote:
Originally Posted by joral In general, I have never seen differences in performance caused by accessing multiple DIMMs. Is your code compiled as a 32-bit application? If so, there could be some performance hit with in-memory swapping. (As I recall, normally Windows gives 2 GB for user accessible memory, and then 2 GB is reserved for windows when running 32-bit on windows 64) It's been a little while since I've done any windows programming though, so I could be off.
Thanks for the heads up. This possibility had not occured to me.
I will see if I can set project options so that the target platform is
Win64 and then recompile.
I am using Visual Studio 2010. Its UI is new and I will have to search to find out how to set 'Win64'. Currently, when I open project settings, the target
Thanks for the feedback. I am not sure if it will make a difference, but
I will try.
I should also probably re-compile my sieve code.
2010-06-22, 17:08 #11
jasonp
Tribal Bullet
Oct 2004
33·131 Posts
Quote:
Originally Posted by TheJudger which MPI implementation are you using? Depending on your code sending a SIGINT to the 'head node' process (do you mean 'MPI rank 0'?) might result in an unclean termination of your parallel job (usually this is not a problem, sometimes you'll have some unterminated ranks left, depending on MPI implementation. E.g. mpich often doesn't "detect" when one rank dies and doesn't kill the other ranks...).
This is with OpenMPI. I don't have the means to get more than one machine working on this at home, so it doesn't matter a great deal which one I use. I can see the handling of signals varying from one set of MPI middleware to the other; maybe I can even configure mpirun to do what I want. The modified LA does explicitly abort after a checkpoint is written, to force the other instances to shut down. (Yes, when I say 'head node' I mean rank 0, the one that handles all tasks besides the matrix multiply)
Greg, do you see a performance difference using fewer MPI process but more than one thread per process?
Bob, are you sure your laptop isn't throttling once the memory controller gets pushed hard enough?
| {}
This options block configures the styling of closed captions in the player for desktop browsers.
On iOS and Android, a system settings menu provides exactly the same settings, as these are mandated by the FCC.
### 💡
If you want to control if captions are rendered using the renderer of the browser or the player, set the renderCaptionsNatively property at the global level of `setup()`.
| Property | Type | Description | Default |
| --- | --- | --- | --- |
| backgroundColor | string | Hex color of the caption characters background | `#000000` |
| backgroundOpacity | number | Alpha percentage of the caption characters background | `75` |
| color | string | Hex color of the captions text | `#ffffff` |
| edgeStyle | string | Method by which the captions characters are separated from their background | `none` |
| fontFamily | string | Font family of the captions text | `sans` |
| fontOpacity | number | Alpha percentage of the captions text | `100` |
| fontSize | number | Size of the captions text (will not affect text size when rendering captions via browser) | `15` |
| windowColor | string | Hex color of the background of the entire captions area | `#000000` |
| windowOpacity | number | Alpha percentage of the background of the entire captions area | `0` |
### 📘
When setting caption styles, color must be specified as a hex value. | {} |
# Open balls with radius $>\epsilon$ in a compact metric space
In a compact metric space $(X,d)$, for a given $\epsilon>0$, if $(x_j)_{j \in J}$ is a family of points of $X$ such that the balls $B(x_j, \epsilon)$ are pairwise disjoint, does it automatically follow that $J$ is finite?
Motivation: I was working through an exercise stating that if $(O_i)_{i \in I}$ is a family of disjoint open sets then $I$ is at most countable, so the above problem was the first idea that I had. Eventually I gave up on it and solved my exercise differently but I would like to know if that property even holds.
Assume, to the contrary, that $J$ is not finite.
Let $M$ be the closure of $\{x_j\}_{j\in J}$. As $M$ is a closed subset of the compact space $X$, it is itself compact. Now, the system $(B(x_j, \epsilon))_{j\in J}$ is an open cover of $M$. Hence, there are $j_1,\dotsc,j_n\in J$ such that $$M \subseteq \bigcup_{i=1}^n B(x_{j_i}, \epsilon).$$ Let $j\in J\setminus\{ j_1, \dotsc, j_n \}$. Then it follows that $$x_j \in M \subseteq \bigcup_{i=1}^n B(x_{j_i}, \epsilon).$$ Thus, $B(x_j, \epsilon)$ is not disjoint from $B(x_{j_i}, \epsilon)$ for a suitable $1\le i \le n$ (both contain $x_j$), contradicting the assumption that the balls are pairwise disjoint.
This property is called "totally bounded". You can use the compactness of $X$ to prove it as follows. Let $S= \{ B(x_j, \epsilon ) \}_{j \in J}$, where $B(x_j, \epsilon) \ne B(x_k, \epsilon)$ when $j \ne k$. Let $T= \cup \{ B(y, \epsilon /2) \mid y \in X - \cup S \}$. Now $C=(S \cup \{ T \} ) - \{ \emptyset \}$ is an open cover of $X$ and it is irreducible: no proper subset of $C$ is a cover of $X$. So $C$ is finite, because $X$ is compact. So $S- \{ \emptyset \}$ is finite. So $S$ is finite. So $J$ is finite. If you do not assume $B(x_j, \epsilon) \ne B(x_k, \epsilon)$ for distinct $j,k$, then $J$ can be infinite, but I'm sure that is what you meant.
# Definition:Syllogism
## Definition
A syllogism is an argument with exactly two premises and one conclusion.
## Examples
### Syllogism of Socrates
All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal. | {} |
Symmetry, Graph Method
Problem - 1
Let $n$ be a positive integer. Prove that in the series $$\Big\lfloor{\frac{n}{1}}\Big\rfloor, \Big\lfloor{\frac{n}{2}}\Big\rfloor, \Big\lfloor{\frac{n}{3}}\Big\rfloor, \cdots, \Big\lfloor{\frac{n}{n}}\Big\rfloor$$ there are fewer than $2\sqrt{n}$ distinct integers. | {}
Solve the following
Question:
Considering that $\Delta_{0}>\mathrm{P}$, the magnetic moment (in BM) of $\left[\mathrm{Ru}\left(\mathrm{H}_{2} \mathrm{O}\right)_{6}\right]^{2+}$ would be
Solution:
The magnetic moment (in B.M.) of $\left[\mathrm{Ru}\left(\mathrm{H}_{2} \mathrm{O}\right)_{6}\right]^{2+}$, given that $\Delta_{0}>\mathrm{P}$, is found as follows.
$\mathrm{Ru}\,(Z=44): [\mathrm{Kr}]\,4d^{7}\,5s^{1}$ (ground state)
$\Rightarrow$ In $\mathrm{Ru}^{2+}$: $4d^{6} \Rightarrow (t_{2g})^{6}(e_{g})^{0}$
$\Rightarrow$ Hence the number of unpaired electrons in
$\mathrm{Ru}^{2+} = (t_{2g})^{6}(e_{g})^{0}$ is $n = 0$, and therefore
$\mu_{\mathrm{m}}=\sqrt{n(n+2)}$ B.M. $=0$ B.M. | {}
# QFTs with a finite cutoff
I'm wondering what exactly happens if a QFT is regularized with a smooth cutoff, but it is not sent to infinity at the end of the day. I'm certainly thinking about string theory's exponential smoothness around the string scale.
Polchinski says in his introduction to String Theory vol. 1
In quantum field theory it is not easy to smear out interactions in a way that preserves the consistency of the theory. We know that Lorentz invariance holds to very good approximation, and this means that if we spread the interaction in space we spread it in time as well, with consequent loss of causality or unitarity.
The first thing that came to my mind was the Ostrogradski instability, but the theorem applies to a finite number of higher derivatives. A smooth cutoff however needs an infinite number of them. Another possible issue that I thought of is that the cutoff is normally enforced in Euclidean signature. But this affects loop momenta only, which are being integrated over.
I also found this blog post (paragraph about discretized theories) by Jacques Distler saying only that it requires fine-tuning due the cosmological constant.
So how exactly does a smooth finite cutoff violate unitarity?
One cheap way to see this maybe (as in: mathematical, but that doesn't connect immediately to Polchinski) is that a smooth cut-off in momentum space will necessarily alter the analytic structure of the answer, and it's hard to see how this would not immediately affect the unitarity cuts. For instance, add a $$\exp(-\ell^2 \alpha')$$ to a scalar box integral, where $$\ell$$ is the loop momentum (one-loop) and $$\alpha'$$ some inverse mass scale. This shuts off high energy modes smoothly, but it stays in any unitarity cut as the exponential of some on-shell momenta and is not a function you expect from a tree-level graph in your theory, at least not if it's just conventional field theory with finitely many particles. Unitarity tells you that at tree-level you might have only poles. This exponential has a bad divergence at infinity on the complex plane. I'm not sure how this relates directly to Polchinski's "wrapping in time" argument though.
• The paper looks deep, but I need more time to digest it. Thanks! About loops: When performing the sewing construction (Polchinksi chapter 9.4) to reduce the genus 1 closed bosonic string amplitude to a genus 0 amplitude, I got (after a sketchy calculation) a factor of $4^{-\alpha' \ell^2}$ for the tachyon propagator from the integration over the moduli. The momentum cutoff translates to a truncation of the Schwinger proper time. This is actually how I started to think about QFTs with a finite cutoff. It would be interesting to see the connection to the infinite tower of states you mentioned. – and008 Mar 1 at 13:33
• @and008 hmm I'm not sure that this is a factor that you would get from that computation. You might have things like $\exp(\alpha' \tau (\ell^2-4/\alpha'))$ where $\tau$ has the usual UV cutoff at the lower side of the fundamental domain, but it's not really in connection with the loop-momentum part. Concerning your first comment, I'm not sure what to say. If the analytic structure at tree-level is not the one correpsonding to unitarity, you're in trouble... If you insert another function with a heaviside step for instance, you'll change the analytic structure of your answer and that's also bad – picop Mar 2 at 14:18
• You were right, in doing a more thorough calculation, I got for the tachyon partition function (in the $\alpha' \rightarrow 0$ limit) $Z=-\int d^Dp \Gamma(0,p^2+m^2)$, where $m^2=-4/\alpha'$. Interestingly, this corresponds exactly to the Lagrangian $(1)$ and $(8)$ in 1803.08827 (up to constants). But they didn't write where they got it from. – and008 Mar 5 at 11:10 | {} |
# 37 (number)
← 36 37 38 →
Cardinal: thirty-seven
Ordinal: 37th (thirty-seventh)
Factorization: prime
Prime: 12th
Divisors: 1, 37
Greek numeral: ΛΖ´
Roman numeral: XXXVII
Binary: 100101₂
Ternary: 1101₃
Octal: 45₈
Duodecimal: 31₁₂
37 (thirty-seven) is the natural number following 36 and preceding 38.
## In mathematics
• 37 is a cuban prime, i.e. a prime of the form ${\displaystyle p={\frac {x^{3}-y^{3}}{x-y}}\qquad \left(x=y+1\right).}$
• 37 and 38 are the first pair of consecutive positive integers not divisible by any of their digits.
• 37 appears in the Padovan sequence, preceded by the terms 16, 21, and 28 (it is the sum of the first two of these).[8]
• Since the greatest prime factor of 37² + 1 = 1370 is 137, which is more than twice 37, 37 is a Størmer number.[9]
• 37*(1+1+1) = 111
• 37*(2+2+2) = 222
• 37*(3+3+3) = 333
• 37*(4+4+4) = 444
• 37*(5+5+5) = 555
• 37*(6+6+6) = 666
• 37*(7+7+7) = 777
• 37*(8+8+8) = 888
• 37*(9+9+9) = 999
## In sports
José María López used this number during his successful years in the World Touring Car Championship from 2014 until 2016. He has continued to use this number in Formula E since joining in the 2016–17 season with DS Virgin Racing.
## In other fields
House number in Baarle (in its Belgian part)
Thirty-seven is: | {} |
# necessary and sufficient chromatic number $\chi$ of a graph
Which chromatic number $\chi(G)$ (vertex coloring number) is necessary and which is sufficient for the following undirected Graph:
$G = (V, E)$ with
$V = \{1,2,3,4,5,6,7\}$ and
$E = \{\{1,2\}, \{1,6\}, \{2,3\}, \{2,4\}, \{3, 5\}, \{3,7\}, \{4,6\}, \{5,7\}, \{6,7\}\}$
I know that the Graph $G$ can be colored with at least 3 colors without conflicts, for instance:
Vertices 1, 3, 4 (red)
Vertices 2, 5, 6 (green)
Vertex 7 (blue)
So $\chi(G) = 3$ is the necessary chromatic number or is $\chi(G) \le 3$ is necessary, since $1$ and $2$ are necessary for $3$?
A different coloring could be, that each vertex has a unique color. In this case $\chi(G) = 7$ would be the sufficient chromatic number since there are $7$ vertices. Or is $\chi(G) \le 7$ the sufficient number, since the graph could be colored also with $4, 5$ or $6$ colors?
Could anyone tell me please, what is wrong and what is right?
Thanks
• 3 is necessary because (3,5,7) form a triangle. So there cannot be a 2 coloring (try coloring a triangle with 2 colors). 3 is also sufficient, since you have a valid coloring with 3 colors. 7 is also sufficient, but you can do better with 3. – Artimis Fowl Dec 7 '17 at 5:48
We can say "$k$ colors are necessary to (properly) color $G$": this means you cannot color $G$ with fewer than $k$ colors. For clarity, you can say "At least $k$ colors are necessary to color $G$", but this means exactly the same thing. In your particular example, we can make statements such as:
• $1$ color is necessary, because $G$ has vertices, and they need to be given colors.
• $2$ colors are necessary, because vertices $1$ and $2$ are adjacent, so just coloring them requires two different colors.
• $3$ colors are necessary, because any two of the vertices $3$, $5$, and $7$ are adjacent, so coloring these three vertices requires three different colors.
(In general, there could be subtler reasons why $k$ colors are necessary, but that's all we've got in this example.)
We can say "$k$ colors are sufficient to (properly) color $G$": this means that there exists a coloring with $k$ colors. In your example, we can say:
• $3$ colors are sufficient, because there is a coloring of $G$ with three colors: color vertices $1,3,4$ red, vertices $2,5,6$ green, and vertex $7$ blue.
• $4$ colors are sufficient, because the above coloring is also a coloring of $G$ with the four colors red, green, blue and yellow. It happens never to use yellow.
• The same argument shows why larger number of colors are sufficient, too.
• Another reason why $7$ colors are sufficient is that we can give every vertex its own color. (In general, for an $n$-vertex graph, $n$ colors are always sufficient.)
But there is no such thing as the "necessary chromatic number" or "sufficient chromatic number" of a graph. The chromatic number $\chi(G)$ of a graph $G$ always refers to the least number of colors we can use to color $G$, so in this example $\chi(G)=3$. It's always exactly one number (even when we don't know it yet, and even if we're thinking about suboptimal colorings that use a different number of colors). To prove that $\chi(G)=3$, we need to show that $3$ colors are sufficient (there is a $3$-coloring of $G$) and that $3$ colors are necessary (we can't color $G$ with fewer colors).
The statement "$k$ colors are necessary to color $G$" is equivalent to saying that $\chi(G) \ge k$, and the statement "$k$ colors are sufficient to color $G$" is equivalent to saying that $\chi(G) \le k$. But that's just a way of putting a sentence into mathematical notation. | {} |
Compactness of Hardy-Type Operators over Star-Shaped Regions in $\mathbb{R}^N$
We study a compactness property of the operators between weighted Lebesgue spaces that average a function over certain domains involving a star-shaped region. The cases covered are (i) when the average is taken over a difference of two dilations of a star-shaped region in $\mathbb{R}^N$, and (ii) when the average is taken over all dilations of star-shaped regions in $\mathbb{R}^N$. These cases include, respectively, the average over annuli and the average over balls centered at origin.
Keywords: Hardy operator, Hardy-Steklov operator, compactness, boundedness, star-shaped regions
Categories: 46E35, 26D10 | {}
What is the file name of Helvetica font used by PSTricks grid labels?
This is related to Herbert's comment in answer for Which one is recommended to get cropped PDF and EPS graphics?.
He said I can use dvips -h hv______.pfb input.dvi to embed font to the resulting PS file. But I don't know which font file corresponds to the font used by PSTricks grid labels.
I have a list of font files with names begin with h as follows, but I don't know which one I must use. :-)
Question: What is the file name of Helvetica font used by PSTricks grid labels?
Note: I am using Windows 7 with TeX Live 2010 installed and my workflow is latex->dvips->ps2pdf. You may need this info.
2 Answers
The Helvetica font is just one of the 35 predefined fonts. You can probably get through by saying
\psset[pstricks]{gridfont=NimbusSanL-Regu}
in your file and then run dvips with the option
-h uhvr8a.pfb
that will load the URW clone of Helvetica which is provided in TeX Live.
\psset{gridfont=helvetica} should work without using the additional header for dvips. I have these fonts in my example pdf:
voss@shania:~/Documents> pdffonts latex6.pdf
name type emb sub uni object ID
------------------------------------ ----------------- --- --- --- ---------
SVYBYQ+helvetica Type 1C yes yes no 8 0
AQAEUC+CMR10 Type 1C yes yes no 11 0
That's kind of magic, isn't it? – egreg Jul 7 '11 at 13:27
In my machine, it does not work. – xport Jul 7 '11 at 13:45
then it is a "problem" :-) with my configuration. However, the URW fonts should be embedded by default with a standard TeXLive installation – Herbert Jul 7 '11 at 14:04
The combination of \psset[pstricks]{gridfont=...} and ps2pdf -dPDFSETTINGS#/prepress %1.ps is the best solution to embed font to the PDF. No need to explicity use header switch as in dvips -h hv______.pfb input.dvi. – xport Jul 7 '11 at 14:40 | {} |
# How do I find hash value of a 3D vector ?
14 replies to this topic
### #1brainydexter Members - Reputation: 158
Posted 05 April 2010 - 03:12 PM
I am trying to perform broad-phase collision detection with a fixed-grid size approach. Thus, for each entity's position: (x,y,z) (each of type float), I need to find which cell does the entity lie in. I then intend to store all the cells in a hash-table and then iterate through to report (if any) collisions. So, here is what I am doing: Grid-cell's position: (int type) (Gx, Gy, Gz) => (x / M, y / M, z / M) where M is the size of the grid. Once, I have a cell, I'd like to add it to a hash-table with its key being a unique hash based on (Gx, Gy, Gz) and the value being the cell itself. Now, I cannot think of a good hash function and I need some help with that. Can someone please suggest me a good hash function? Thanks
### #2sinalta Members - Reputation: 257
Posted 05 April 2010 - 04:09 PM
The simplest option I can think of is the same idea as how you determine the index for a 1D array that you are treating as a 3D array (which I worked out the other day, handily enough [grin]).
I probably didn't explain that very well, so something like this:
size_t gridCellsWide = ...;
size_t gridCellsHigh = ...;
size_t gridCellsDeep = ...;
float gridSize = ...;
Vector3f position = ...;

// encode
size_t x = static_cast<size_t>( position.x / gridSize );
size_t y = static_cast<size_t>( position.y / gridSize );
size_t z = static_cast<size_t>( position.z / gridSize );
size_t hash = x + ( y * gridCellsWide ) + ( z * gridCellsWide * gridCellsHigh );

// decode
x = hash % ( gridCellsWide * gridCellsHigh ) % gridCellsWide;
y = ( hash % ( gridCellsWide * gridCellsHigh ) ) / gridCellsWide;
z = hash / ( gridCellsWide * gridCellsHigh );

Vector3f cellCenterPoint;
cellCenterPoint.x = ( x * gridSize ) + ( gridSize * 0.5f );
cellCenterPoint.y = ( y * gridSize ) + ( gridSize * 0.5f );
cellCenterPoint.z = ( z * gridSize ) + ( gridSize * 0.5f );
That hash value would actually be the position of your grid cell if you stored the grids as a 1D array. You probably don't if you have a predetermined grid size, but I often don't or I'm dealing with image data or something similar, meaning I have to decode from 1D to 2D/3D and back again.
Because the hash value is just the 1D index of the cell it will be unique per cell. Not per object, but since you're using this for determining collision pairs it shouldn't be anyway.
Hope that helps [smile]
### #3brainydexter Members - Reputation: 158
Posted 05 April 2010 - 06:02 PM
Thanks a lot for the reply diablos_blade. I appreciate you posting some code as well.
I understand what you are saying and it is I think a simple yet effective technique. However, here is what concerns me.. I've assumed the world to be of infinite length and width. So, I really don't know how many cells would there be along the length/width.
Thus, how do you recommend coming up with values for gridCellsWide, gridCellsDeep ?
### #4sinalta Members - Reputation: 257
Posted 05 April 2010 - 06:50 PM
Quote:
Original post by brainydexterI understand what you are saying and it is I think a simple yet effective technique. However, here is what concerns me.. I've assumed the world to be of infinite length and width. So, I really don't know how many cells would there be along the length/width.
Ah, now that does pose a problem for a a simple grid system =P, and not one I've tackled myself. It depends on the type of game of course but it seems to be solved by more of a heirarchical system.
Abusing the fact that "infinite" is technically impossible, the world is first split up into a (very) large grid, maybe thousands of meters in size for each cell, then these grids are split down into the fine grid system that we're talking about here, they can also be streamed in and out of memory as the player moves around. But that technique is pretty much an open-world game only idea.
On the other hand you could do something hacky like:
update all entity positions
find minimum and maximum entity positions
define grid bounds (width, height, depth) from the range of positions
hash all entities
The reason I say that's a hacky option is because it means you can't cache the hash values between frames, because the grid could be a completely different size from one frame to the next. It also means constantly re-hashing static objects, which is just unnecessary work, as well as potentially a bunch of extra work depending on the number of entities. You could do a couple of things to ease the work load of such a system, but all in all, not nice at all =P
What type of game are you making? Spatial hashing is rather difficult in general without having some limit on the world, so knowing that would be a lot of help in helping you figure out a solution, even though I'll admit I'm pretty much out of my area of experience if you're planning a psuedo infinite open-world game. [wink]
### #5brainydexter Members - Reputation: 158
Posted 05 April 2010 - 07:29 PM
Ah, thanks for pointing that out. I was actually thinking infinite only from a theoretical perspective.
Now, I am making a FPS genre game which will not have a really large infinite world. It will have a finite-level, much like you had in Quake/Unreal Tournament. But, I don't have that in the game yet. In fact, all I am trying to do is a broad-phase for players and the bullets they fire.
So, lets keep the level out of this for now.. To answer my question about infinite length/width, what if we assume (for now) a large number..say 100 (in grid units, i.e. 100 grid units long and 100 grid units deep). Now, what you suggested earlier can be applied with this assumption. This assumption would no doubt break, if any of the players run outside this huge world space (i.e. 100 * GridcellSize). But, if that would happen, players would really be far far away from each other and I don't think that would happen (atleast for now).
If all fails, I will choose my world bounds one cell less than the max world size (i.e. 99 * GridCellsize), and put a check in there which will not let them move beyond it!
Once, I have the level-geometry, I can query the extents of that and use those to calculate the grid length/depth.
One more thing, how did you handle negative position values ??
### #6sinalta Members - Reputation: 257
Posted 05 April 2010 - 07:52 PM
Quote:
Original post by brainydexterAh, thanks for pointing that out. I was actually thinking infinite only from a theoretical perspective.Now, I am making a FPS genre game which will not have a really large infinite world. It will have a finite-level, much like you had in Quake/Unreal Tournament. But, I don't have that in the game yet. In fact, all I am trying to do is a broad-phase for players and the bullets they fire.So, lets keep the level out of this for now.. To answer my question about infinite length/width, what if we assume (for now) a large number..say 100 (in grid units, i.e. 100 grid units long and 100 grid units deep). Now, what you suggested earlier can be applied with this assumption. This assumption would no doubt break, if any of the players run outside this huge world space (i.e. 100 * GridcellSize). But, if that would happen, players would really be far far away from each other and I don't think that would happen (atleast for now). If all fails, I will choose my world bounds one cell less than the max world size (i.e. 99 * GridCellsize), and put a check in there which will not let them move beyond it!
Also known as invisible walls =P
Keeping players within level bounds is pretty much a solved problem. Outdoor levels you put an invisible wall somewhere, keeping players away from the edges by designing the level so the main focus points are nowhere near them. With indoor levels... you have actual walls =P
Quote:
Original post by brainydexterOnce, I have the level-geometry, I can query the extents of that and use those to calculate the grid length/depth.What do you think about this ?
Sounds like the right idea to me [smile]
Quote:
Original post by brainydexterOne more thing, how did you handle negative position values ??
Easily:
Entity entity = ...;
Vector3f gridMinimumBounds = ...; // most likely a -ve value for each axis, assuming the grid is centered around 0,0,0. But any value would be fine.
Vector3f offsetPosition = entity.Position( ) - gridMinimumBounds;
size_t hash = PositionHash( offsetPosition );
StoreEntity( hash, entity );
As you can see, the actual position value you hash is the offset from the minimum bounds of the grid, which is one of the things you can query when loading in the level data. This offset should always be positive since it should be impossible to leave your level [smile]
### #7EJH Members - Reputation: 315
Posted 06 April 2010 - 12:53 AM
Here's another way to convert a position to a grid cell. Works for positive or negative numbers without division or mod.
// sector size: size of a square grid cell
// min and max: define the min and max points of the grid
// e.g. min:-10 max:10 defines a grid from -10,-10 to 10,10
// gridWidth: number of cells across the grid
// conversionFactor: to avoid division in the hash function
float sectorSize = 1000f;
float min = -10000f;
float max = 10000f;
int gridWidth = (int)((max - min) / sectorSize);
float conversionFactor = 1f / sectorSize;

// how to get x,y grid cell of a given point
Vector2 position;
int cellX = (int)((position.X + max) * conversionFactor);
int cellY = (int)((position.Y + max) * conversionFactor);
### #8thefries Members - Reputation: 103
Posted 08 April 2010 - 11:06 PM
First I discretize x, y and z by dividing them by the cell size and casting them to integers. Then I apply this hash function:
uint32 hash = (((uint32)pos.x) * 73856093) ^ (((uint32)pos.y) * 19349663) ^ (((uint32)pos.z) * 83492791);
So x, y and z are cast to unsigned (I don't care about the wrap-around), then each is multiplied by a large prime number, and the results are XORed together. This gives a relatively nice distribution.
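To tie the pieces of this thread together, here is a small Python sketch (my own illustration, not code from any post above) of the bucket-and-report broad phase using this three-prime hash. The entity names and positions are made up, and it hashes a single point per entity rather than all corners of a bounding box.

from collections import defaultdict
from itertools import combinations

P1, P2, P3 = 73856093, 19349663, 83492791  # the three large primes from the post above

def cell_hash(pos, cell_size):
    # Discretize a 3D position into grid coordinates, then XOR the prime-scaled coordinates.
    gx, gy, gz = (int(c // cell_size) & 0xFFFFFFFF for c in pos)
    return ((gx * P1) ^ (gy * P2) ^ (gz * P3)) & 0xFFFFFFFF

def broad_phase(entities, cell_size):
    # Bucket entities by cell hash and report candidate pairs within each bucket.
    buckets = defaultdict(list)
    for name, pos in entities.items():
        buckets[cell_hash(pos, cell_size)].append(name)
    pairs = set()
    for bucket in buckets.values():
        pairs.update(combinations(sorted(bucket), 2))
    return pairs

entities = {"player": (1.0, 2.0, 3.0), "bullet": (1.2, 2.1, 3.3), "crate": (40.0, 0.0, 9.0)}
print(broad_phase(entities, cell_size=5.0))  # {('bullet', 'player')}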
### #9brainydexter Members - Reputation: 158
Posted 09 April 2010 - 08:04 PM
@ Everyone: Thanks a lot for your replies. I appreciate all the help. This is what I ended up using:
I assume that the world is bounded by -Extents to +Extents (in grid units), so the array is essentially of length 2 * Extents.
So, after I find the grid position, I shift the coordinate system from (-Extents to +Extents) to (0 to 2 * Extents). Thus the hash key has the added multiplying factor of 2.
glm::vec3 worldPosn = l_Corners[k]; // corner of the bounding box
float invMaxRadius = 1 / (2.0f * Player::MaxRadius);
glm::vector::uvec3 gridPosn = (glm::vector::uvec3)(worldPosn * invMaxRadius);

// translating gridPosn from -XTents to XTents
gridPosn.x += GridCellsWide;
gridPosn.y += GridCellsHigh;
gridPosn.z += GridCellsDeep;

std::size_t hashKey = gridPosn.x + (gridPosn.y * (2 * GridCellsWide) ) + (gridPosn.z * GridCellsWide * GridCellsHigh * 4);
I think this should work. At the moment, I am getting a weird error with an object being deleted and I'm trying to fix that, so I have no way to see if this is working or not. So, what do you guys think about this?
Thanks!
### #10thefries Members - Reputation: 103
Posted 10 April 2010 - 03:30 AM
That looks a lot more like a conversion from a 3d array index into a 1d array index than a hash key.
### #11brainydexter Members - Reputation: 158
Posted 10 April 2010 - 07:27 AM
@theFries: Which I think makes up a candidate for a good hash key :) ?
### #12kloffy Members - Reputation: 1192
Posted 10 April 2010 - 07:40 AM
I can't really comment on whether your approach gives good hash values. However, the primary concern when generating hashes should be to minimize hash collisions. This is usually accomplished with a uniform distribution of hash values, e.g. as suggested by thefries.
### #13Valkoran Members - Reputation: 108
Posted 11 April 2010 - 03:59 AM
You guys are blowing this out of proportion. Minimize collisions? Give me a break. The only time you will get a collision with the presented method is when you have two of the same coordinates, or you try to mix and match different grids together.
### #14sinalta Members - Reputation: 257
Posted 11 April 2010 - 05:56 AM
I'll agree the method I provided isn't a typical hashing function. I did infact say that it is just a method of converting a 3D index into a 1D index.
The reason I suggested it as a solution is because the only time you will get a hash collision is when two objects are in the same grid cell, which is what I understood was wanted. Using this you could now iterate over your hash table and only check for collisions between objects which have the same hash value. Depending on how the hash table is structured it could end up being faster than iterating over all grid cells and testing collisions or iterating over each object, getting the grid cell it is in and then iterating over those.
Of course if this isn't what you wanted to use the hash value for, then you should probably ignore my idea and go for something similar to thefries method. [smile]
### #15 brainydexter Members - Reputation: 158
Posted 11 April 2010 - 05:59 AM
@diablos: Nope, this is exactly what I wanted and it is proving out to be quite useful. I've got it working with players already! Time to get the bullets also integrated!!
Thanks for all the comments guys.
#### 3.2.2 Axis and ticks
The MathGL library can draw not only the bounding box but also the axes, grids, labels and so on. The ranges of axes and their origin (the point of intersection) are determined by the functions SetRange(), SetRanges(), SetOrigin() (see Ranges (bounding box)). Ticks on an axis are specified by the functions SetTicks, SetTicksVal, SetTicksTime (see Ticks), but usually the built-in defaults are sufficient.
The function axis draws axes. Its textual string argument specifies in which directions the axis or axes will be drawn (by default "xyz", i.e. the function draws axes in all directions). The function grid draws a grid perpendicular to the specified directions. An example of axes and grid drawing is:
int sample(mglGraph *gr)
{
gr->SubPlot(2,2,0); gr->Title("Axis origin, Grid"); gr->SetOrigin(0,0);
gr->Axis(); gr->Grid(); gr->FPlot("x^3");
gr->SubPlot(2,2,1); gr->Title("2 axis");
gr->SetRanges(-1,1,-1,1); gr->SetOrigin(-1,-1,-1); // first axis
gr->Axis(); gr->Label('y',"axis 1",0); gr->FPlot("sin(pi*x)");
gr->SetRanges(0,1,0,1); gr->SetOrigin(1,1,1); // second axis
gr->Axis(); gr->Label('y',"axis 2",0); gr->FPlot("cos(pi*x)");
gr->SubPlot(2,2,3); gr->Title("More axis");
gr->SetOrigin(NAN,NAN); gr->SetRange('x',-1,1);
gr->Axis(); gr->Label('x',"x",0); gr->Label('y',"y_1",0);
gr->FPlot("x^2","k");
gr->SetRanges(-1,1,-1,1); gr->SetOrigin(-1.3,-1); // second axis
gr->Axis("y","r"); gr->Label('y',"#r{y_2}",0.2);
gr->FPlot("x^3","r");
gr->SubPlot(2,2,2); gr->Title("4 segments, inverted axis");
gr->SetOrigin(0,0);
gr->InPlot(0.5,1,0.5,1); gr->SetRanges(0,10,0,2); gr->Axis();
gr->FPlot("sqrt(x/2)"); gr->Label('x',"W",1); gr->Label('y',"U",1);
gr->InPlot(0,0.5,0.5,1); gr->SetRanges(1,0,0,2); gr->Axis("x");
gr->FPlot("sqrt(x)+x^3"); gr->Label('x',"\\tau",-1);
gr->InPlot(0.5,1,0,0.5); gr->SetRanges(0,10,4,0); gr->Axis("y");
gr->FPlot("x/4"); gr->Label('y',"L",-1);
gr->InPlot(0,0.5,0,0.5); gr->SetRanges(1,0,4,0); gr->FPlot("4*x^2");
return 0;
}
Note that MathGL can draw not only a single axis (which is the default) but also several axes on the plot (see the right plots). The idea is that a change of settings does not influence the already drawn graphics. So, for two axes I set up the first axis and draw everything concerning it, then I set up the second axis and draw the things for the second axis. Generally, the same idea allows one to draw a rather complicated plot with 4 axes with different ranges (see the bottom left plot).
Here, an inverted axis can be created by two methods. The first one is used in this sample – just specify a minimal axis value that is larger than the maximal one. This method works well for a 2D axis, but can place labels wrongly in the 3D case. The second method is more general and works in the 3D case too – just use the aspect function with negative arguments. For example, the following code will produce exactly the same result in the 2D case, but the second variant will look better in 3D.
// variant 1
gr->SetRanges(0,10,4,0); gr->Axis();
// variant 2
gr->SetRanges(0,10,0,4); gr->Aspect(1,-1); gr->Axis();
Another MathGL feature is fine tick tuning. By default (if it is not changed by the SetTicks function), MathGL tries to adjust the tick positioning so that the ticks look most human readable. In doing so, MathGL tries to extract a common factor for too large or too small axis ranges, as well as for too narrow ranges. The latter is a non-standard notation and can be disabled by the SetTuneTicks function.
Also, one can specify one's own ticks with arbitrary labels with the help of the SetTicksVal function, or one can set ticks in time format. In the latter case MathGL will try to select the optimal format for labels, automatically switching between years, months/days, hours/minutes/seconds or microseconds. However, you can specify your own time representation using the formats described in http://www.manpagez.com/man/3/strftime/. The most common variants are ‘%X’ for the national representation of time, ‘%x’ for the national representation of date, and ‘%Y’ for the year with century.
The sample code demonstrating the tick features is:
int sample(mglGraph *gr)
{
gr->SubPlot(3,2,0); gr->Title("Usual axis"); gr->Axis();
gr->SubPlot(3,2,1); gr->Title("Too big/small range");
gr->SetRanges(-1000,1000,0,0.001); gr->Axis();
gr->SubPlot(3,2,3); gr->Title("Too narrow range");
gr->SetRanges(100,100.1,10,10.01); gr->Axis();
gr->SubPlot(3,2,4); gr->Title("Disable ticks tuning");
gr->SetTuneTicks(0); gr->Axis();
gr->SubPlot(3,2,2); gr->Title("Manual ticks"); gr->SetRanges(-M_PI,M_PI, 0, 2);
mreal val[]={-M_PI, -M_PI/2, 0, 0.886, M_PI/2, M_PI};
gr->SetTicksVal('x', mglData(6,val), "-\\pi\n-\\pi/2\n0\nx^*\n\\pi/2\n\\pi");
gr->Axis(); gr->Grid(); gr->FPlot("2*cos(x^2)^2", "r2");
gr->SubPlot(3,2,5); gr->Title("Time ticks"); gr->SetRange('x',0,3e5);
gr->SetTicksTime('x',0); gr->Axis();
return 0;
}
The last sample I want to show in this subsection is the log-axis. From MathGL’s point of view, the log-axis is a particular case of general curvilinear coordinates. So, we first need to define new coordinates (see also Curvilinear coordinates) with the help of the SetFunc or SetCoor functions. Here one should be wary about the proper axis range. So the code looks as follows:
int sample(mglGraph *gr)
{
gr->SubPlot(2,2,0,"<_"); gr->Title("Semi-log axis");
gr->SetRanges(0.01,100,-1,1); gr->SetFunc("lg(x)","");
gr->Axis(); gr->Grid("xy","g"); gr->FPlot("sin(1/x)");
gr->Label('x',"x",0); gr->Label('y', "y = sin 1/x",0);
gr->SubPlot(2,2,1,"<_"); gr->Title("Log-log axis");
gr->SetRanges(0.01,100,0.1,100); gr->SetFunc("lg(x)","lg(y)");
gr->Axis(); gr->Grid("!","h="); gr->Grid();
gr->FPlot("sqrt(1+x^2)"); gr->Label('x',"x",0);
gr->Label('y', "y = \\sqrt{1+x^2}",0);
gr->SubPlot(2,2,2,"<_"); gr->Title("Minus-log axis");
gr->SetRanges(-100,-0.01,-100,-0.1); gr->SetFunc("-lg(-x)","-lg(-y)");
gr->Axis(); gr->FPlot("-sqrt(1+x^2)");
gr->Label('x',"x",0); gr->Label('y', "y = -\\sqrt{1+x^2}",0);
gr->SubPlot(2,2,3,"<_"); gr->Title("Log-ticks");
gr->SetRanges(0.1,100,0,100); gr->SetFunc("sqrt(x)","");
gr->Axis(); gr->FPlot("x");
gr->Label('x',"x",1); gr->Label('y', "y = x",0);
return 0;
}
You can see that MathGL automatically switches to log-ticks as we define the log-axis formula (in contrast to v.1.*). Moreover, it switches to log-ticks for any formula if the axis range is large enough (see the bottom right plot). Another interesting feature is that you do not necessarily have to define the usual log-axis (i.e. when coordinates are positive); you can also define a “minus-log” axis when the coordinate is negative (see the bottom left plot).
I was browsing on the WoW forums and came upon a link; unknowingly, I clicked it, and later on I read in the replies that the link might be a keylogger.
link disabled - it's a very bad idea to post suspicious links in the forums. We don't want anyone to click the link and get infected as well.
I'm not sure if it is a keylogger or not; maybe someone can help me?
Thank you
I already analyzed my HJT log with www.hijackthis.de and no suspicious entries popped up, so I think it's safe...
my Hijackthis log:
Logfile of HijackThis v1.99.1
Scan saved at 7:54:01 PM, on 6/7/2007
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v6.00 SP2 (6.00.2900.2180)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\system32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\system32\spoolsv.exe
C:\WINDOWS\Explorer.EXE
C:\Program Files\Microsoft IntelliPoint\ipoint.exe
C:\PROGRA~1\Keyboard\Ikeymain.exe
C:\Program Files\Java\jre1.6.0\bin\jusched.exe
C:\Program Files\TheWeatherNetwork\WeatherEye\WeatherEye.exe
C:\Program Files\PerSono\PersTray.exe
C:\Program Files\OpenOffice.org 2.2\program\soffice.exe
C:\Program Files\OpenOffice.org 2.2\program\soffice.BIN
C:\Program Files\Mozilla Firefox\firefox.exe
C:\WINDOWS\system32\nvsvc32.exe
C:\WINDOWS\system32\PnkBstrA.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\System32\svchost.exe
C:\Program Files\Anti-keylogger\Anti-keylogger.exe
C:\Program Files\MSN Messenger\msnmsgr.exe
C:\WINDOWS\System32\svchost.exe
C:\Program Files\HijackThis\HijackThis.exe
R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = http://sympatico.msn.ca/
R1 - HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings,ProxyOverride = 127.0.0.1
O2 - BHO: flashget urlcatch - {2F364306-AA45-47B5-9F9D-39A8B94E7EF7} - C:\Program Files\FlashGet\jccatch.dll
O2 - BHO: Watch for Browser Events - {42A7CE31-CEE7-4CCE-A060-A44A7E52E062} - C:\PROGRA~1\KEYBOA~1\kie.dll
O2 - BHO: (no name) - {53707962-6F74-2D53-2644-206D7942484F} - C:\Program Files\Spybot - Search & Destroy\SDHelper.dll
O2 - BHO: SSVHelper Class - {761497BB-D6F0-462C-B6EB-D4DAF1D92D43} - C:\Program Files\Java\jre1.6.0\bin\ssv.dll
O2 - BHO: FlashGet GetFlash Class - {F156768E-81EF-470C-9057-481BA8380DBA} - C:\Program Files\FlashGet\getflash.dll
O3 - Toolbar: Yahoo! Toolbar - {EF99BD32-C1FB-11D2-892F-0090271D4F88} - C:\Program Files\Yahoo!\Companion\Installs\cpn\yt.dll
O4 - HKLM\..\Run: [NvCplDaemon] RUNDLL32.EXE C:\WINDOWS\system32\NvCpl.dll,NvStartup
O4 - HKLM\..\Run: [intelliPoint] "C:\Program Files\Microsoft IntelliPoint\ipoint.exe"
O4 - HKLM\..\Run: [iKeyWorks] C:\PROGRA~1\Keyboard\Ikeymain.exe
O4 - HKLM\..\Run: [sunJavaUpdateSched] "C:\Program Files\Java\jre1.6.0\bin\jusched.exe"
O4 - HKLM\..\Run: [Anti-keylogger] C:\Program Files\Anti-keylogger\Anti-keylogger.exe /autorun
O4 - HKCU\..\Run: [WeatherEye] C:\Program Files\TheWeatherNetwork\WeatherEye\WeatherEye
O4 - Startup: OpenOffice.org 2.2.lnk = C:\Program Files\OpenOffice.org 2.2\program\quickstart.exe
O4 - Global Startup: Perstray.lnk = C:\Program Files\PerSono\PersTray.exe
O8 - Extra context menu item: E&xport to Microsoft Excel - res://E:\MSOFFICE\OFFICE11\EXCEL.EXE/3000
O9 - Extra button: (no name) - {08B0E5C0-4FCB-11CF-AAA5-00401C608501} - C:\Program Files\Java\jre1.6.0\bin\ssv.dll
O9 - Extra 'Tools' menuitem: Sun Java Console - {08B0E5C0-4FCB-11CF-AAA5-00401C608501} - C:\Program Files\Java\jre1.6.0\bin\ssv.dll
O9 - Extra button: Research - {92780B25-18CC-41C8-B9BE-3C9C571A8263} - E:\MSOFFICE\OFFICE11\REFIEBAR.DLL
O9 - Extra button: FlashGet - {D6E814A0-E0C5-11d4-8D29-0050BA6940E3} - C:\Program Files\FlashGet\FlashGet.exe
O9 - Extra 'Tools' menuitem: FlashGet - {D6E814A0-E0C5-11d4-8D29-0050BA6940E3} - C:\Program Files\FlashGet\FlashGet.exe
O9 - Extra button: Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\Program Files\Messenger\msmsgs.exe
O9 - Extra 'Tools' menuitem: Windows Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\Program Files\Messenger\msmsgs.exe
O18 - Protocol: livecall - {828030A1-22C1-4009-854F-8E305202313F} - C:\PROGRA~1\MSNMES~1\MSGRAP~1.DLL
O18 - Protocol: msnim - {828030A1-22C1-4009-854F-8E305202313F} - C:\PROGRA~1\MSNMES~1\MSGRAP~1.DLL
O20 - Winlogon Notify: igfxcui - C:\WINDOWS\SYSTEM32\igfxdev.dll
O20 - Winlogon Notify: WgaLogon - C:\WINDOWS\SYSTEM32\WgaLogon.dll
O23 - Service: NVIDIA Display Driver Service (NVSvc) - NVIDIA Corporation - C:\WINDOWS\system32\nvsvc32.exe
O23 - Service: PnkBstrA - Unknown owner - C:\WINDOWS\system32\PnkBstrA.exe
Edited by miekiemoes
Welcome to SWI. We apologize for the delay; our helpers have been very busy.
# Variable ranking with LASSO in discriminant analysis
### Description
This function implements variable ranking procedure in discriminant analysis using the penalized EM algorithm of Zhou et al (2009) (adapted in Sedki et al (2014) for the discriminant analysis settings).
### Usage
SortvarLearn(data, knownlabels, lambda, rho, nbCores)
### Arguments
data: matrix containing quantitative data. Rows correspond to observations and columns correspond to variables.
knownlabels: an integer vector or a factor of size the number of observations. Each cell corresponds to a cluster affectation, so the maximum value is the number of clusters.
lambda: numeric listing of tuning parameters for the \ell_1 mean penalty.
rho: numeric listing of tuning parameters for the \ell_1 precision matrix penalty.
nbCores: number of CPUs to be used when parallel computing is utilized (default is 2).
### Value
vector of integers corresponding to variable ranking.
### Author(s)
Mohammed Sedki mohammed.sedki@u-psud.fr
### References
Zhou, H., Pan, W., and Shen, X., 2009. "Penalized model-based clustering with unconstrained covariance matrices". Electronic Journal of Statistics, vol. 3, pp.1473-1496.
Maugis, C., Celeux, G., and Martin-Magniette, M. L., 2009. "Variable selection in model-based clustering: A general variable role modeling". Computational Statistics and Data Analysis, vol. 53/11, pp. 3872-3882.
Sedki, M., Celeux, G., Maugis-Rabusseau, C., 2014. "SelvarMix: A R package for variable selection in model-based clustering and discriminant analysis with a regularization approach". Inria Research Report available at http://hal.inria.fr/hal-01053784
### Examples
## Not run:
## Simulated data example as shown in Sedki et al (2014)
## n = 2000 observations, p = 14 variables
require(glasso)
data(scenarioCor)
data.cor <- scenarioCor[,1:14]
labels.cor <- scenarioCor[,15]
lambda <- seq(20, 50, length = 10)
rho <- seq(1, 2, length = 2)
## variable ranking in discriminant analysis
var.ranking.da <- SortvarLearn(data.cor, labels.cor, lambda, rho)
## End(Not run)
# Find the first digit after the decimal point
#### anemone
##### MHB POTW Director
Staff member
Determine the first decimal digit after the decimal point in the number $\sqrt{x^2+x+1}$ if $\large x=2014^{2014^{2014}}$
#### kaliprasad
##### Well-known member
Determine the first decimal digit after the decimal point in the number $\sqrt{x^2+x+1}$ if $\large x=2014^{2014^{2014}}$
x is very large: $x=2014^{2014^{2014}}$,
so $x^2 + x + 1= (x + 1/2)^2 + 3/4$
$= (x+1/2)^2\left( 1+ \dfrac{3}{4(x + 1/2)^2}\right)$
so the square root $= (x+1/2)\left( 1 + \dfrac{3}{8(x+1/2)^2} + \cdots\right)$
the term $\dfrac{3}{8(x+1/2)^2}$ is extremely small, so the correction to $x+1/2$ is $\ll 0.1$,
so the square root is just above $x + 1/2$, and 5 is the first digit after the decimal point.
Last edited:
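For completeness, the approximation step can be made fully rigorous with an elementary bound (an added remark, not part of kaliprasad's post): for $u,c>0$ one has $u<\sqrt{u^2+c}<u+\dfrac{c}{2u}$, so with $u=x+\tfrac12$ and $c=\tfrac34$,
$$x+\tfrac{1}{2}\;<\;\sqrt{x^2+x+1}\;<\;x+\tfrac{1}{2}+\frac{3}{8\left(x+\tfrac{1}{2}\right)}.$$
Since $x$ is an integer and the upper correction is astronomically smaller than $0.1$, the fractional part of $\sqrt{x^2+x+1}$ lies just above $0.5$, so the first decimal digit is indeed $5$.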
#### anemone
##### MHB POTW Director
Staff member
x is very large: $x=2014^{2014^{2014}}$,
so $x^2 + x + 1= (x + 1/2)^2 + 3/4$
$= (x+1/2)^2\left( 1+ \dfrac{3}{4(x + 1/2)^2}\right)$
so the square root $= (x+1/2)\left( 1 + \dfrac{3}{8(x+1/2)^2} + \cdots\right)$
the term $\dfrac{3}{8(x+1/2)^2}$ is extremely small, so the correction to $x+1/2$ is $\ll 0.1$,
so the square root is just above $x + 1/2$, and 5 is the first digit after the decimal point.
Hey kaliprasad, thanks for participating! Well done! Your answer is correct... but I think this edited version of the solution isn't quite as straightforward as the post before the edit.
x is too large $2014^{2014^{2014}}$
so $x^2+x+1=(x+1/2)^2+3/4$ and as 3/4 is too small we can ignore it, so …
## ABC and cosmology
Posted in Books, pictures, Statistics, University life on May 4, 2015 by xi'an
Two papers appeared on arXiv in the past two days with the similar theme of applying ABC-PMC [one version of which we developed with Mark Beaumont, Jean-Marie Cornuet, and Jean-Michel Marin in 2009] to cosmological problems. (As a further coincidence, I had just started refereeing yet another paper on ABC-PMC in another astronomy problem!) The first paper cosmoabc: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation by Ishida et al. [“et al” including Ewan Cameron] proposes a Python ABC-PMC sampler with applications to galaxy clusters catalogues. The paper is primarily a description of the cosmoabc package, including code snapshots. Earlier occurrences of ABC in cosmology are found for instance in this earlier workshop, as well as in Cameron and Pettitt earlier paper. The package offers a way to evaluate the impact of a specific distance, with a 2D-graph demonstrating that the minimum [if not the range] of the simulated distances increases with the parameters getting away from the best parameter values.
“We emphasis [sic] that the choice of the distance function is a crucial step in the design of the ABC algorithm and the reader must check its properties carefully before any ABC implementation is attempted.” E.E.O. Ishida et al.
The second [by one day] paper Approximate Bayesian computation for forward modelling in cosmology by Akeret et al. also proposes a Python ABC-PMC sampler, abcpmc. With fairly similar explanations: maybe both samplers should be compared on a reference dataset. While I first thought the description of the algorithm was rather close to our version, including the choice of the empirical covariance matrix with the factor 2, it appears it is adapted from a tutorial in the Journal of Mathematical Psychology by Turner and van Zandt. One out of many tutorials and surveys on the ABC method, of which I was unaware, but which summarises the pre-2012 developments rather nicely. Except for missing Paul Fearnhead’s and Dennis Prangle’s semi-automatic Read Paper. In the abcpmc paper, the update of the covariance matrix is the one proposed by Sarah Filippi and co-authors, which includes an extra bias term for faraway particles.
“For complex data, it can be difficult or computationally expensive to calculate the distance ρ(x; y) using all the information available in x and y.” Akeret et al.
In both papers, the role of the distance is stressed as being quite important. However, the cosmoabc paper uses an L1 distance [see (2) therein] in a toy example without normalising between mean and variance, while the abcpmc paper suggests using a Mahalanobis distance that turns the d-dimensional problem into a comparison of one-dimensional projections.
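To make the distance contrast concrete, here is a small sketch of the two options (my own illustration, not code from cosmoabc or abcpmc; the toy summaries and covariance are made up):

```python
import numpy as np

def l1_distance(s_sim, s_obs):
    """Plain L1 distance between summary vectors, with no rescaling."""
    return float(np.sum(np.abs(s_sim - s_obs)))

def mahalanobis_distance(s_sim, s_obs, cov):
    """Mahalanobis distance: rescales summaries by their (estimated) covariance."""
    diff = s_sim - s_obs
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

s_obs = np.array([10.0, 0.001])          # summaries on very different scales
s_sim = np.array([10.5, 0.002])
cov = np.diag([1.0, 1e-6])               # e.g. estimated from pilot simulations
print(l1_distance(s_sim, s_obs))                # dominated by the first summary
print(mahalanobis_distance(s_sim, s_obs, cov))  # treats both summaries comparably
```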
## the latest Significance: Astrostats, black swans, and pregnant drivers [and zombies]
Posted in Books, Kids, pictures, Statistics, Travel, University life on February 4, 2015 by xi'an
Reading Significance is always an enjoyable moment, when I can find time to skim through the articles (before my wife gets hold of it!). This time, I lost my copy between my office and home, and borrowed it from Tom Nichols at Warwick with four mornings to read it during breakfast. This December issue is definitely interesting, as it contains several introductory articles on astro- and cosmo-statistics! One thing I had not noticed before is how a large fraction of the papers is written by authors of books, giving a quick entry or interview about their book. For instance, I found out that Roberto Trotta had written a general public book called The Edge of the Sky (All You Need to Know About the All-There-Is) which exposes the fundamentals of cosmology through the 1000 most common words in the English language. So Universe is replaced with All-There-Is! I can understand and to some extent applaud the intention, but it nonetheless makes for a painful read, judging from the excerpt, when researcher and telescope are not part of the accepted vocabulary. Reading the corresponding article in Significance left me a bit bemused at the reason provided for the existence of a multiverse, i.e., of multiple replicas of our universe, all with different conditions: multiplying the universes makes ours more likely, while it sounds almost impossible on its own! This sounds like a very frequentist argument… and I am not even certain it would convince a frequentist. The other articles in this special astrostatistics section were of a more statistical nature, from estimating the number of galaxies to the chances of a big asteroid impact. I particularly liked the graphical representation of the meteorite impacts of the past century because of the impact drawing in the background. However, when I checked the link to Carlo Zapponi's website, I found the picture was a still of a neat animation of meteorites falling since the first report.
## the intelligent-life lottery
Posted in Books, Kids on August 24, 2014 by xi'an
In a theme connected with one argument in Dawkins' The God Delusion, The New York Times just published a piece on the 20th anniversary of the debate between Carl Sagan and Ernst Mayr about the likelihood of the apparition of intelligent life. While 20 years ago there was very little evidence, if any, of the existence of Earth-like planets, the current estimate is about 40 billion… The argument against the high likelihood of other inhabited planets is that the appearance of life on Earth is an accumulation of unlikely events. This is where the paper goes off-road and into the ditch, in my opinion, as it makes the comparison of the emergence of intelligent (at the level of human) life to be “as likely as if a Powerball winner kept buying tickets and — round after round — hit a bigger jackpot each time”. The latter has a very clearly defined probability of occurring, since “the chance of winning the grand prize is about one in 175 million”. The paper does not say where the assessment of this probability can be found for the emergence of human life, and I very much doubt it can be justified. Given the myriad of different species found throughout the history of evolution on Earth, some of which evolved and many more of which vanished, I indeed find it hard to believe that evolution towards higher intelligence is the result of a basically zero probability event. As to conceiving that similar levels of intelligence do exist on other planets, it also seems more likely than not that life took on average the same span to appear and to evolve, and thus that other inhabited planets are equally missing means to communicate across galaxies. Or that the signals they managed to send earlier than us have yet to reach us. Or Earth a long time after the last form of intelligent life will have vanished…
## modern cosmology as a refutation of theism
Posted in Books on June 23, 2014 by xi'an
While I thought the series run by The Stone on the philosophy [or lack thereof] of religions was over, it seems there are more entries. This week, I read with great pleasure the piece written by Tim Maudlin on the role played by recent results in (scientific) cosmology in refuting theist arguments.
“No one looking at the vast extent of the universe and the completely random location of homo sapiens within it (in both space and time) could seriously maintain that the whole thing was intentionally created for us.” T. Maudlin
What I particularly liked in his arguments is the role played by randomness, with an accumulation of evidence of the random nature and location of Earth and human beings, which and who appear more and more at the margins of the Universe rather than the main reason for its existence. And his clear rejection of the argument of fine-tuned cosmological constants as an argument in favour of the existence of a watchmaker. (Argument that was also deconstructed in Seber’s book.) And obviously his final paragraph that “Atheism is the default position in any scientific inquiry”. This may be the strongest entry in the whole series.
## MCMSki [day 2]
Posted in Mountains, pictures, Statistics, University life on January 8, 2014 by xi'an
I was still feeling poorly this morning with my brain in a kind of flu-induced haze, so I could not concentrate for a whole talk, which is a shame as I missed most of the contents of the astrostatistics session put together by David van Dyk… Especially the talk by Roberto Trotta I was definitely looking forward to. And the defence of nested sampling strategies for marginal likelihood approximations. Even though I spotted posterior distributions for WMAP and Planck data on the ΛCDM model that reminded me of our own work in this area… Apologies thus to all speakers for dozing in and out, it was certainly not due to a lack of interest!
Sebastian Seehars mentioned emcee (for ensemble Monte Carlo), with a corresponding software nicknamed “the MCMC hammer”, and their own CosmoHammer software. I read the paper by Goodman and Weare (2010) this afternoon during the ski break (if not on a ski lift!). Actually, I do not understand why an MCMC sampler should be affine invariant: a good adaptive MCMC sampler should anyway catch up with the right scale of the target distribution. Other than that, the ensemble sampler reminds me very much of the pinball sampler we developed with Kerrie Mengersen (1995 Valencia meeting), where the target is the product of L targets,
$\pi(x_1)\cdots\pi(x_L)$
and a Gibbs-like sampler can be constructed, moving one component (with index k, say) of the L-sample at a time. (Just as in the pinball sampler.) Rather than avoiding all other components (as in the pinball sampler), Goodman and Ware draw a single other component at random (with index j, say) and make a proposal away from it:
$\eta=x_j(t) + \zeta \{x_k(t)-x_j(t)\}$
where ζ is a scale random variable with (log-) symmetry around 1. The authors claim improvement over a single-track Metropolis algorithm, but it of course depends on the type of Metropolis algorithm that is chosen… Overall, I think the criticism of the pinball sampler also applies here: using a product of targets can only slow down the convergence. Further, the affine structure of the target support is not a given. Highly constrained settings should not cope well with linear transforms, and non-linear reparameterisations would be more efficient…
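For readers who want to see the move spelled out, here is a minimal sketch of the Goodman and Weare stretch update described above (my own illustration, not the emcee or CosmoHammer code; the walker count, scale parameter a = 2 and all names are my choices):

```python
import numpy as np

def stretch_move(walkers, log_prob, a=2.0, rng=None):
    """One sweep of the affine-invariant 'stretch' move over an ensemble of walkers.

    walkers: (L, d) array, one row per chain; log_prob: callable on a length-d vector.
    """
    rng = rng if rng is not None else np.random.default_rng()
    L, d = walkers.shape
    logp = np.array([log_prob(w) for w in walkers])
    for k in range(L):
        j = rng.choice([i for i in range(L) if i != k])   # draw a single other walker
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a     # z ~ g(z) proportional to 1/sqrt(z) on [1/a, a]
        proposal = walkers[j] + z * (walkers[k] - walkers[j])
        lp_prop = log_prob(proposal)
        if np.log(rng.random()) < (d - 1) * np.log(z) + lp_prop - logp[k]:
            walkers[k], logp[k] = proposal, lp_prop
    return walkers

# toy usage: sample a 2-D standard normal with 10 walkers
rng = np.random.default_rng(1)
walkers = rng.standard_normal((10, 2))
for _ in range(200):
    walkers = stretch_move(walkers, lambda x: -0.5 * float(x @ x), rng=rng)
```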
## big bang/data/computers
Posted in Running, Statistics, University life on September 21, 2012 by xi'an
I missed this astrostatistics conference announcement (and the conference itself, obviously!), occurring next door… Actually, I would have had (wee) trouble getting there as I was (and am) mostly stuck at home with a bruised knee and a doctor's ban on any exercise in the coming days, thanks to a bike fall last Monday! (One of my 1991 bike pedals broke as I was climbing a steep slope and I did not react fast enough… Just at the right time to ruin my training preparation for the Argentan half-marathon. Again.) Too bad, because there were a lot of talks that were of interest to me!
## Kant, Platon, Bayes, & Le Monde…
Posted in Books, Statistics, University life on July 2, 2012 by xi'an
In the weekend edition of Le Monde I bought when getting off my plane back from Osaka, and ISBA 2012!, the science leaflet has a (weekly) tribune by a physicist called Marco Zito that discussed this time the differences between frequentist and Bayesian confidence intervals. While it is nice to see this opposition debated in a general audience daily like Le Monde, I am not sure the tribune will bring enough light to help the newcomer reach an opinion about the difference! (The previous tribune considering Bayesian statistics was certainly more to my taste!)
Since I cannot find a link to the paper, let me sum up: the core of the tribune is to wonder what the 90% in a 90% confidence interval means. The Bayesian version sounds ridiculous since “there is a single true value of [the parameter] M and it is either in the interval or not” [my translation]. The physicist then goes on to state that the probability is in fact “subjective. It measures the degree of conviction of the scientists, given the data, for M to be in the interval. If those scientists were aware of another measure, they would use another interval” [my translation]. Darn… so many misrepresentations in so few words! First, as a Bayesian, I most often consider there is a true value for the parameter associated with a dataset, but I still use a prior and a posterior that are not point masses, without being incoherent, simply because the posterior only summarizes what I know about the parameter, but is obviously not a property of the true parameter. Second, the fact that the interval changes with the measure has nothing to do with being Bayesian. A frequentist would also change her/his interval with other measures… Third, the Bayesian “confidence” interval is but a tiny (and reductive) part of the inference one can draw from the posterior distribution.
From this delicate start, things do not improve in the tribune: the frequentist approach is objective and not contested by Marco Zito, as it sounds eminently logical. Kant is associated with Bayes and Platon with the frequentist approach, “religious wars” are mentioned about both perspectives debating endlessly about the validity of their interpretation (is this truly the case? In the few cosmology papers I modestly contributed to, referees’ reports never objected to the Bayesian approach…). The conclusion makes one wonder what is the overall point of this tribune: superficial philosophy (“the debate keeps going on and this makes sense since it deals with the very nature of research: can we know and speak of the world per se or is it forever hidden to us? (…) This is why doubt and even distrust apply about every scientific result and also in other settings.”) or criticism of statistics (“science (or art) of interpreting results from an experiment”)? (And to preempt a foreseeable question: no, I am not writing to the journal this time!)
# Neural Networks and Complex Valued Inputs
[not sure if this or stats.stackexchange was the correct location for this post, so put it on both for now.]
I've seen some recent papers describing complex-valued neural networks, like this one: Deep Complex Networks, 2017, Trabelsi et al. What I'm wondering is: rather than invent a novel complex network pipeline that takes a complex-valued input as a single channel, why not just separate the real and imaginary components into two channels fed into a regular neural network, and then let the network figure out the relations, without necessarily knowing that one channel represents the real component while the other represents the imaginary component?
I assume there must be some disadvantage to doing it this way, or some relation that the neural network can't pick up on, so if that's the case would someone please provide me with a high-level explanation of why this two-channel standard network approach is inferior to the novel single-channel complex network?
(By the way, the application I have in mind for researching deep complex networks is RF signal classification.)
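For concreteness, the two-channel alternative described in the question would look roughly like this (an illustrative sketch only; the toy signal, shapes and function names are made up):

```python
import numpy as np

def complex_to_channels(iq):
    """Stack real and imaginary parts of complex samples (N,) into a (2, N) real array."""
    return np.stack([iq.real, iq.imag], axis=0).astype(np.float32)

# toy RF-like burst: a complex exponential plus circular Gaussian noise
rng = np.random.default_rng(0)
n = np.arange(1024)
iq = np.exp(2j * np.pi * 0.05 * n) + 0.1 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))

x = complex_to_channels(iq)   # feed this (2, N) array to an ordinary real-valued network
print(x.shape)                # (2, 1024)
```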
• Hi: It's probably easy enough to check if your idea results in the same result from the NN. If not, then it's probably due to the backpropagation algorithm being somehow dependent on the correlation of the real part and imaginary part. This isn't an answer but the backprop algoritm takes partials so it may matter if you send them in separately. I'm very very fuzzy on backprop and this is not a direct answer to your question, but, my point is that, if the results are different, then the backprop algorithm nuances are almost surely the reason why. – mark leeds Apr 25 '18 at 19:03
• I plan to try this out as soon as I get the code working that goes along with the paper I linked. I'm looking to get an intuitive enough understanding that I can explain it to others. Thanks though, that gives me at least some idea of what the issue is. – Austin Apr 25 '18 at 19:07
• Hi: Me again. I just read only the abstract but convolutional networks-deep learning is somewhat different from straight basic NN's so my answer may not be applicable to your question. I have zero knowledge of deep learning aside from regular NN's so can't provide any insight other than above. Apologies for any confusion. – mark leeds Apr 25 '18 at 19:08
The power of complex representations remains an open topic to me. I still strive to understand Fourier transformations.
An underlying question is, to me: why would complex transformations be useful for real data? More generally, when data dwell in a set $S$, is $S$ the most appropriate set for analysis, or is it more appropriate to resort to a bigger set $S^*$? For instance, for real-valued polynomials, we know that the field of complex numbers provides a more elegant extension. This might not be so distant, as $z$-transforms (extended polynomials) are tools of choice for real time-series analysis. For real linear time-invariant systems, the root signals (here eigenvectors) are complex (cisoids). To better separate frequencies, time-frequency analyses often employ analytic signals and the Hilbert transform, or analytic, dual-tree multiscale tools like wavelets. The recent paper Complex-Valued Signal Processing: The Proper Way to Deal With Impropriety deals with more stochastic observations:
Complex-valued signals occur in many areas of science and engineering and are thus of fundamental interest. In the past, it has often been assumed, usually implicitly, that complex random signals are proper or circular. A proper complex random variable is uncorrelated with its complex conjugate, and a circular complex random variable has a probability distribution that is invariant under rotation in the complex plane. While these assumptions are convenient because they simplify computations, there are many cases where proper and circular random signals are very poor models of the underlying physics. When taking impropriety and noncircularity into account, the right type of processing can provide significant performance gains. There are two key ingredients in the statistical signal processing of complex-valued data: 1) utilizing the complete statistical characterization of complex-valued random signals; and 2) the optimization of real-valued cost functions with respect to complex parameters. In this overview article, we review the necessary tools, among which are widely linear transformations, augmented statistical descriptions, and Wirtinger calculus. We also present some selected recent developments in the field of complex-valued signal processing, addressing the topics of model selection, filtering, and source separation.
But going back in time (Oppenheim's works for instance), one knows that complex phase can capture non-stationarity, discontinuity (edges and jumps) and oscillatory behaviors (textures). My present belief is that, at a given scale, a complex feature computed as a whole is more efficient at capturing local behavior that a pair of real and imaginary parts computed somewhat separately, incorporating some invariances with respect to translation or rotation.
As for neural networks, and of course deep learning, the recent theory of scattering networks, and subsequent works, have provided a solid ground for understanding how deep learning works, from a solid mathematical point of view, based on complex wavelet frames and non-linear operators. Another interesting paper for your question is:
M. Tygert et al., 2016, A Mathematical Motivation for Complex-Valued Convolutional Networks, Neural Computation
Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.
Convolutional networks (convnets) have become increasingly important to artificial intelligence in recent years, as reviewed by LeCun, Bengio, and Hinton (2015). This note presents a theoretical argument for complex-valued convnets and their remarkable performance. Complex-valued convnets turn out to calculate data-driven multiscale windowed spectra characterizing certain stochastic processes common in the modeling of time series (such as audio) and natural images (including patterns and textures). We motivate the construction of such multiscale spectra using local averages of multiwavelet absolute values or, more generally, nonlinear multiwavelet packets.
And maybe, complex numbers are not enough, and we should turn to quaternions...
• This is very interesting thanks for the paper links! – Austin Apr 26 '18 at 0:51
• And I believe this is an ongoing trend: Complex is good for reals :) – Laurent Duval May 1 '18 at 14:17
Training is essentially an optimization, and assuming you have complex weights $z$ and a real-valued objective, $$\min_{z} f(z), \qquad f(z)\ \text{real-valued},$$ then because of the Cauchy–Riemann conditions $f(z)$ is not analytic (essentially it doesn't have a Taylor series in $z$ alone), so this is handled as: $$\min_{z,z^*} f(z,z^*)$$ where $z$ and $z^*$ are considered independent, which seems a bit strange at first, but you can linearly (and invertibly) transform from $z,z^*$ to $\text{Real}(z), \text{Imag}(z)$ and have an equivalent optimization $$\min_z g( \text{Real}(z), \text{Imag}(z))$$
A good article is
@article{RN18,
author = {Sorber, Laurent and Barel, Marc Van and Lathauwer, Lieven De},
title = {Unconstrained optimization of real functions in complex variables},
journal = {SIAM Journal on Optimization},
volume = {22},
number = {3},
pages = {879-898},
ISSN = {1052-6234},
year = {2012},
type = {Journal Article}
}
which has a reference to Bramwood, which is the way it is typically introduced in array processing.
Going complex or working in terms of real and imaginary parts is equivalent, given that you understand that the optimization in terms of complex variables is necessarily in terms of $z,z^*$.
• I'll have to look up the math references as they are a bit above my head for now, but to understand your conclusion are you saying there is not necessarily a performance degradation from treating the real and imaginary components as separate channel real values in a standard neural network? Or did I miss your point completely? – Austin Apr 25 '18 at 20:19
• Just to double check that my original question was clear, I meant in the one scenario to have everything be real-valued, that is take the real component as one real channel, the imaginary component as another real channel, real valued weights, and a real valued activation function. So like instead of inputting [a+bi], just input [a,b] separately without explicitly telling the network that the second channel is imaginary. – Austin Apr 25 '18 at 20:23
• I'm just saying you can get to the same place, 2 different ways. Didn't say that getting there takes the same time, but aside from the time to make sure and test your training algorithm for the complex case, there should be about the same number of math operations, so about the same, but given the amount of time it took C to have intrinsic Complex types, I wouldn't just assume the same level of optimization. The SIAM paper favours the $z,z^*$ approach. If you are more interested in the network, the real,imaginary approaches is going to be quicker and safer. Programing wise – Stanley Pawlukiewicz Apr 25 '18 at 20:33
• Thanks. I'm more asking from the standpoint of accuracy rather than compute time. I was wondering if there was any reason to believe that the complex case directly would achieve higher classification accuracy for complex data than splitting it into two channels. I heard some talk about neural nets not being able to understand the relation between real and imaginary components when fed in seperately because of the cyclical patterns of the data. – Austin Apr 25 '18 at 20:35
• When it comes to non-convex optimization, your guess is as good as mine. The paper isn't that hard with Wikipedia as a backup for some clarifications. You should let it make the case, either way. – Stanley Pawlukiewicz Apr 25 '18 at 20:41
Browse Questions
# Explain : Ionic solids are hard and brittle
Ionic crystals are hard because of the strong electrostatic forces of attraction between the oppositely charged ions.
They are brittle because the ionic bond is non-directional: if one layer of ions is displaced with respect to the next, ions of like charge come face to face, the layers repel each other and the crystal cleaves.
answered Jul 30, 2014
# Problem algebra involving third roots
1. Sep 2, 2012
### ParisSpart
1. The problem statement, all variables and given/known data
Let $x = \sqrt[3]{\sqrt{108} + 10} - \sqrt[3]{\sqrt{108} - 10}$. Show that $x^3 + 6x - 20 = 0$, and from this infer the value of $x$ (it is a small natural number).
3. The attempt at a solution
May I have some ideas on how to approach this? I tried to find x, but I don't know how to handle the roots...
2. Sep 2, 2012
### SammyS
Staff Emeritus
Re: Problem algebra
Hello ParisSpart. Welcome to PF ! (Yes, I see that you have started one other thread, some time ago.)
What specifically have you tried?
Where are you stuck?
To show that $\sqrt[3]{\sqrt{108}+10\ }-\sqrt[3]{\sqrt{108}-10\ }$ is a solution to x3 + 6x - 20 = 0, plug $\sqrt[3]{\sqrt{108}+10\ }-\sqrt[3]{\sqrt{108}-10\ }$ in for x in your equation.
Of course, you will need to cube $\sqrt[3]{\sqrt{108}+10\ }-\sqrt[3]{\sqrt{108}-10\ }\ .$
$(a - b)^3 = a^3 - 3a^2 b + 3ab^2 - b^3$.
3. Sep 2, 2012
### SammyS
Staff Emeritus
This is one of those nasty looking problems, but it can be shown that $\sqrt[3]{\sqrt{108}\pm 10\ }=\sqrt{3}\pm 1$.
Also it may be helpful to express, $(a-b)^3$ as
$a^3-b^3-3ab(a-b)\ .$
4. Sep 3, 2012
### HallsofIvy
Staff Emeritus
Do you know the "cubic formula"? If a and b are any two numbers, then
$$(a- b)^3= a^3- 3a^2b+ 3ab^2- b^3$$
and
$$3ab(a- b)= 3a^2b- 3ab^2$$
so that $(a- b)^3+ 3ab(a- b)= a^3- b^3$. That is, if we let $x= a- b$, $m= 3ab$, and $n= a^3- b^3$, then $x^3+ mx= n$.
And we can do this "the other way around": knowing $m$ and $n$, solve for $a$ and $b$ and so solve the (reduced) cubic equation $x^3+ mx= n$. From $m= 3ab$, $b= m/(3a)$ and then $n= a^3- m^3/(3^3a^3)$. Multiplying by $a^3$ gives $na^3= (a^3)^2- (m/3)^3$, which is a quadratic in $a^3$, $(a^3)^2- na^3- (m/3)^3= 0$, which can be solved by the quadratic formula:
$$a^3= \frac{n\pm\sqrt{n^2+ 4(m/3)^3}}{2}= \frac{n}{2}\pm\sqrt{(n/2)^2+ (m/3)^3}$$
Since $a^3- b^3= n$,
$$b^3= a^3- n= -\frac{n}{2}\pm\sqrt{(n/2)^2+ (m/3)^3}$$
and x is the difference of cube roots of those.
The point is that, in this problem, the numbers are given in exactly that form! We can work out that n/2= 10 so n= 20, and that $(n/2)^2- (m/3)^3= 100- (m/3)^3= 108$ so that (m/3)^3= -8, m/3= -2, and m= -6. | {} |
# zbMATH — the first resource for mathematics
Group decision making and consensus under fuzzy preferences and fuzzy majority. (English) Zbl 0768.90003
The paper develops fuzzy set-based models for fundamental relations of strict preference, indifference, and incomparability. This generalization is aimed at preserving all classical properties found in preference modelling. Recall that in this theory the above binary relations are defined in a given family $A$ of alternatives as follows: Strict preference: $aPb$ iff $aRb$ and not $bRa$; Indifference: $aIb$ iff $aRb$ and $bRa$; Incomparability: $aJb$ iff not $aRb$ and not $bRa$, where $R$ denotes a binary relation of weak preference, say $aRb$ iff $a$ is at least as good as $b$. The main results pertain to an extension of the classical results by proposing fuzzy models for the above relations. It is proved that a “reasonable” generalization (preserving the properties found in the Boolean case) should be based upon Lukasiewicz-like De Morgan triples.
##### MSC:
91B08 Individual preferences
03E72 Fuzzy set theory
## An entire online text for Precalc from the Univeristy of Houston.
http://online.math.uh.edu/Math1330/index.html
It's the best free source I've seen. It's the project of grad students and is a comprehensive online text that has everything you'd expect from a textbook. And on top of that, there are streaming lectures.
University of Washington has a free precalc book too.
Very short Trig course: http://www.clarku.edu/~djoyce/trig/
# If H is Hermitian, show that $e^{iH}$ is unitary
The only approaches I have seen to answering this question have involved manipulating $e^{iH}$ like an ordinary exponential, i.e.
If $U=e^{iH}$, then $U^\dagger = (e^{iH})^\dagger = e^{-iH^\dagger}=e^{-iH}$
Therefore $UU^\dagger = e^{iH}e^{-iH}= e^0 = 1$
I have written the proof out above as I have seen it, although I presume it would be more correct in the last line to say $e^\textbf{0}=I$ where $\textbf{0}$ is the zero matrix.
I am not sure to what extent this constitutes a valid proof. It seems to me that we are assuming many things about exponentials raised to matrices, which are not immediately obvious to me.
Since $e^A$ for some matrix $A$ is defined through the power series $\Sigma_{n=0}^{\infty} \frac{A^n}{n!}$, I am more inclined to proving this through the power series approach. Using this, I get (omitting a few trivial steps)
$UU^\dagger = e^{iH}e^{-iH}= \Sigma_{n=0}^{\infty} \Sigma_{m=0}^{\infty} \frac{(iH)^n}{n!} \frac{(-iH)^m}{m!} = \frac{1}{2} \Sigma_{n=0}^{\infty} \Sigma_{m=0}^n \frac{(iH)^{n-m}}{n!m!}$
Although I am not sure that the final step is correct (I have seen this replacement for a double summation in the context of electrostatics, but there the term for $n=m$ was zero, which it is not here), so I am not quite sure how to deal with that.
My overall questions here are as follows:
a) Is the proof using only exponentials valid? Specifically, the step that assumes $e^{iH}e^{-iH}= e^\textbf{0}$? (The rest can be gotten easily from the power series expansion).
b) Is the last step of the summation in my incomplete proof correct? I realise this question isn't entirely related to the original question, so I am happy to ask it on a new post.
c) Whether or not the last line is correct, I am stuck with the summations and don't know how to show that they all cancel expect for a single $I$ term... One thing I had in mind was considering the summation as a contraction between an antisymmetric tensor- the $(iH)^{n-m}$- and a symmetric tensor $1/{n!m!}$, although I am not sure about this at all. I don't think the first is antisymmetric...
Instead of answering your questions directly, let me just add some comments on the proof in general that might be useful. The statement you want to prove hinges on two facts:
1. $e^{A} e^{B} = e^{A+B}$ when $A$ and $B$ commute
2. $(e^A)^\dagger = e^{A^\dagger}$
Depending on how careful you want to write your proof you might (or might not) want to include the proof of these two facts.
The first property might seem obvious, but it doesn't hold when $A$ and $B$ don't commute. This property can be proved using the power-series definition of the matrix exponential, the binomial theorem for $(A+B)^n$ and the Cauchy formula for the product of two power-series. A second way is to define $f(t) = e^{(A+B)t} - e^{At}e^{Bt}$ and show that $f'(t) = 0$ and $f(0) = 0$. And I'm sure there are other ways.
The second property follows directly from the power-series definition of the matrix exponential and the fact that $(A^n)^\dagger = (A^\dagger)^n$ which can be proven by first showing it for $n=2$ and then using induction.
Here is one example of a way to prove the first property in some detail. To do so we introduce a parameter $t$ to help us with book-keeping (we consider it as a power-series in $t$)
$$\matrix{e^{tA}e^{tB} &=& \sum_{n,m=0}^\infty \frac{A^n}{n!}\frac{B^m}{m!} t^{n+m} & \text{(definition of matrix exp.)}\\&=& \sum_{p=0}^\infty \frac{t^p}{p!}\left(\sum_{k=0}^p{p\choose k}A^k B^{p-k}\right) & \text{(rewrite as standard power-series)}\\ &=& \sum_{p=0}^\infty \frac{t^p}{p!}(A+B)^p & \text{(binomial theorem)}\\ &=& e^{t(A+B)} & \text{(definition of matrix exp.)}}$$
Note that the binomial theorem only holds if $A$ and $B$ commute (for example $(A+B)^2 = A^2 + AB + BA + B^2$ is seen to equal $A^2 + 2AB + B^2$ only when $AB = BA$).
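As a quick numerical sanity check of both facts (a sketch using NumPy and SciPy; `scipy.linalg.expm` computes the matrix exponential), one can verify that $e^{iH}$ is unitary for a random Hermitian $H$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Random Hermitian matrix: H = (A + A^dagger) / 2
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

U = expm(1j * H)

# U U^dagger should be the identity
print(np.allclose(U @ U.conj().T, np.eye(4)))                 # True

# e^{iH} e^{-iH} = e^{0} = I, valid because iH and -iH commute
print(np.allclose(expm(1j * H) @ expm(-1j * H), np.eye(4)))   # True
```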
In the case where H is acting on a finite dimensional vector space, you can essentially view it as a matrix, in which case (by for example the BCH formula) the relation you state in a) is valid. More generally if $[A,B]=0$ then the product of exponentials is just the exponential of the sum.
There may be subtleties in the more general case, but I doubt you'd even be interested in those.
As for b) and also c), note that $(-x)^m = (-1)^m x^m$. I can provide further hints if needed.
Edit: I just saw that you mention matrices, so never mind the more general case (I was referring to operators on infinite dimensional vector spaces, because the notation triggered my brain to think this was necessarily about quantum mechanics) | {} |
### History and Feedback
#### Checkpoint description
Describe what's changed since the last checkpoint.
Published this.
#### Christian Lawson-Perfect2 years, 4 months ago
Gave some feedback: Ready to use
#### Christian Lawson-Perfect2 years, 4 months ago
Saved a checkpoint:
I'm not sure why part b had two equations in it. I've split it into two parts.
A couple of uses of "question" instead of "equation" in the advice.
The quadratic formula was missing $x=$ a few times in the advice!
Otherwise, this is good.
#### Hannah Aldous2 years, 4 months ago
Gave some feedback: Needs to be tested
#### Chris Graham2 years, 4 months ago
Nearly there!
In the steps you'll need to give the form of the quadratic equation associated with the solution, to assist the student in identifying $a$, $b$ and $c$.
#### Chris Graham2 years, 4 months ago
Gave some feedback: Has some problems
#### Hannah Aldous2 years, 4 months ago
Gave some feedback: Needs to be tested
#### Chris Graham2 years, 4 months ago
I have changed the wording of the statement slightly and also removed "require trailing zeros" in part (a), which was too harsh.
I've removed (i) from part (a) as there is only one sub-part.
Otherwise looks good. However, if this is the first time we meet the quadratic formula (I'm guessing from the way the statement is worded) then I would like to have it available to the student, perhaps as a step to part (a)?
#### Chris Graham2 years, 4 months ago
Gave some feedback: Has some problems
#### Hannah Aldous2 years, 4 months ago
Gave some feedback: Needs to be tested
#### Vicky Hall2 years, 4 months ago
Gave some feedback: Has some problems
#### Vicky Hall2 years, 4 months ago
I would give the quadratic formula in the statement and have $x=$ in front of it. I would amend the statement so that it says it can also be useful to use the formula if equations are difficult to factorise (perhaps if coefficients are large), as the equations in part a) and part b)i) could both be solved by factorising instead but using the formula is (probably) quicker.
The expected answers in part a) are the numbers that would appear in the factorised equation and not the roots so you need to negate the answers and then swap their gaps around.
#### Hannah Aldous2 years, 4 months ago
Gave some feedback: Needs to be tested
#### Christian Lawson-Perfect2 years, 4 months ago
Gave some feedback: Has some problems
#### Christian Lawson-Perfect2 years, 4 months ago
The non-zero right-hand sides in part a are gotchas: I'd like to have a nice part first, where the right-hand side is zero.
Could you make the coefficients in part b work so that you get integers out? You want to see if the student can work out how to deal with getting an algebraic answer, and rounding errors would be a distraction.
In part b it looks like the roots are the wrong way round - the lowest root is second.
You could split this into two questions: "use the quadratic formula to solve an equation with non-zero RHS", and "use the quadratic formula to solve an equation in terms of another variable".
#### Hannah Aldous2 years, 4 months ago
Gave some feedback: Needs to be tested
#### Lauren Richards2 years, 4 months ago
• Your i) and ii) need to be not bold and in italics in the parts.
• The quadratic formula equation in the start of the advice is slightly incorrect - it should be 2a on the denominator, not 2.
• Something is not quite right for a)ii) in the advice as it is written as it would be when writing it and also midway through the advice for part b).
#### Lauren Richards2 years, 4 months ago
Gave some feedback: Has some problems
#### Hannah Aldous2 years, 4 months ago
Gave some feedback: Needs to be tested
#### Hannah Aldous2 years, 5 months ago
Created this.
Using the Quadratic Formula to Solve Equations of the Form $ax^2 +bx+c=0$ Ready to use Hannah Aldous 01/08/2017 14:06
Use the quadratic formula to solve an equation in terms of an unknown variable Ready to use Hannah Aldous 01/08/2017 13:56
Katherine's copy of Using the Quadratic Formula to Solve Equations of the Form $ax^2 +bx+c=0$ draft Katherine Tomlinson 02/08/2017 12:45
Emma's copy of Using the Quadratic Formula to Solve Equations of the Form $ax^2 +bx+c=0$ draft Emma Cliffe 31/08/2017 12:59
Peter's copy of Using the Quadratic Formula to Solve Equations of the Form $ax^2 +bx+c=0$ draft Peter Knowles 12/09/2017 12:39
Using the Quadratic Formula to Solve Equations of the Form $ax^2 +bx+c=0$ [L4 Randomised] Needs to be tested Matthew James Sykes 25/07/2018 11:06
Simon's copy of Using the Quadratic Formula to Solve Equations of the Form $ax^2 +bx+c=0$ draft Simon Thomas 07/03/2019 09:55
Simon's copy of Use the quadratic formula to solve an equation in terms of an unknown variable draft Simon Thomas 07/03/2019 10:00
Use the quadratic formula to solve an equation in terms of an unknown variable draft Xiaodan Leng 11/07/2019 01:50
Solve Equations of the Form $ax^2 +bx+c=0$ draft Thomas Waters 11/09/2019 22:18
Michael's copy of Solve Equations of the Form $ax^2 +bx+c=0$ draft Michael Proudman 09/10/2019 08:25
Blathnaid's copy of Use the quadratic formula to solve an equation in terms of an unknown variable draft Blathnaid Sheridan 21/10/2019 12:19
Blathnaid's copy of Use the quadratic formula to solve an equation in terms of an unknown variable draft Blathnaid Sheridan 21/10/2019 12:33
| {}
# Generic binary search tree in C++
## Introduction
This is yet another data structure I'm going over again for the algorithms course. This time it is binary search tree.
Implemented operations:
• Insert
• Exists
• Remove
• Clear
• Move constructor and assignment, destructor
There are some tests for the remove function below the data structure itself. I tested other functions, but the tests got overridden in the interim. They did pass though. I believe automated ones are not possible until I make the tree somehow traversable, but that's an adventure for another time.
## Concerns
• Extreme code duplication
The 4 case functions (nullptr, equal, less, greater) share the same control flow statements. I believe abstracting that away would make it worse though.
• Extremely complicated remove function
This one took around an hour of my time to get correct, from starting to write it to debugging the three cases I found and tested for.
It just gives that feeling. Or maybe most of the algorithms I've seen are much more elegant than this.
## Code
#include <stdexcept>
#include <type_traits>
#include <ostream>
#include <utility>
template <typename ValueType>
class binary_search_tree
{
struct node
{
const ValueType value;
node* left;
node* right;
};
enum class direction
{
is_root,
left,
right
};
struct search_result
{
node* parent;
node* target_child;
direction parent_to_child;
};
node* root;
public:
binary_search_tree() :
root(nullptr)
{}
binary_search_tree(const binary_search_tree& other) = delete;
binary_search_tree& operator=(const binary_search_tree& other) = delete;
binary_search_tree(binary_search_tree&& other) :
root(std::exchange(other.root, nullptr))
{}
binary_search_tree& operator=(binary_search_tree&& other) noexcept
{
std::swap(root, other.root);
return *this;
}
bool try_insert(const ValueType& value)
{
return try_insert_helper(value, root);
}
bool exists(const ValueType& value)
{
return find_node(value, nullptr, root, direction::is_root).target_child != nullptr;
}
bool delete_if_exists(const ValueType& value)
{
auto [parent_node, node_with_value, parent_to_child] =
find_node(value, nullptr, root, direction::is_root);
if (node_with_value == nullptr)
return false;
if (node_with_value->left == nullptr)
{
auto old = node_with_value;
switch (parent_to_child)
{
case direction::left:
parent_node->left = node_with_value->left;
break;
case direction::right:
parent_node->right = node_with_value->right;
break;
case direction::is_root:
root = root->right;
}
delete old;
return true;
}
if (node_with_value->left->right == nullptr)
{
switch (parent_to_child)
{
case direction::left:
parent_node->left = node_with_value->right;
node_with_value->right->left = node_with_value->left;
break;
case direction::right:
parent_node->right = node_with_value->right;
node_with_value->right->left = node_with_value->left;
break;
case direction::is_root:
root->left->right = root->right;
root = root->left;
}
delete node_with_value;
return true;
}
auto [suitable_parent, suitable_node] =
find_suitable_node(node_with_value->left->right, node_with_value->left);
switch (parent_to_child)
{
case direction::left:
parent_node->left = suitable_node;
suitable_node->right = node_with_value->right;
suitable_node->left = node_with_value->left;
break;
case direction::right:
parent_node->right = suitable_node;
suitable_node->right = node_with_value->right;
suitable_node->left = node_with_value->left;
break;
case direction::is_root:
suitable_node->right = root->right;
suitable_node->left = root->left;
root = suitable_node;
}
suitable_parent->right = nullptr;
delete node_with_value;
return true;
}
void clear()
{
clear_helper(root);
}
void inorder_print(std::ostream& os)
{
if (root == nullptr)
return;
inorder_print_helper(os, root);
}
~binary_search_tree()
{
clear();
}
private:
std::pair<node*, node*> find_suitable_node(node* start_position, node* parent)
{
if (start_position->right == nullptr)
return {parent, start_position};
return find_suitable_node(start_position->right, start_position);
}
void clear_helper(node* start_position)
{
if (start_position == nullptr)
return;
clear_helper(start_position->left);
clear_helper(start_position->right);
delete start_position;
}
search_result find_node(const ValueType& value,
node* parent,
node* current_node,
direction parent_to_child)
{
if (current_node == nullptr)
return {nullptr, nullptr, direction::is_root};
if (current_node->value == value)
return {parent, current_node, parent_to_child};
if (value < current_node->value)
return find_node(value, current_node, current_node->left, direction::left);
else
return find_node(value, current_node, current_node->right, direction::right);
}
bool exists_helper(const ValueType& value,
node* current_node)
{
if (current_node == nullptr)
return false;
if (current_node->value == value)
return true;
if (value < current_node->value)
return exists_helper(value, current_node->left);
else
return exists_helper(value, current_node->right);
}
void inorder_print_helper(std::ostream& os,
node*& current_node)
{
if (current_node == nullptr)
return;
inorder_print_helper(os, current_node->left);
os << current_node->value << ' ';
inorder_print_helper(os, current_node->right);
}
bool try_insert_helper(const ValueType& value,
node*& current_node)
{
if (current_node == nullptr)
{
current_node = new node{value};
return true;
}
if (current_node->value == value)
return false;
if (current_node->value > value)
return try_insert_helper(value, current_node->left);
else
return try_insert_helper(value, current_node->right);
}
};
#include <iostream>
#include <sstream>
void test_remove_case_one()
{
binary_search_tree<int> tree;
tree.try_insert(2);
tree.try_insert(3);
tree.try_insert(1);
tree.try_insert(4);
tree.try_insert(-2);
tree.try_insert(0);
tree.delete_if_exists(3);
std::ostringstream oss;
tree.inorder_print(oss);
if (oss.str() != "-2 0 1 2 4 ")
throw std::logic_error("remove case one fails");
}
void test_remove_case_two()
{
binary_search_tree<int> tree;
tree.try_insert(4);
tree.try_insert(7);
tree.try_insert(11);
tree.try_insert(1);
tree.try_insert(-2);
tree.try_insert(0);
tree.delete_if_exists(4);
std::ostringstream oss;
tree.inorder_print(oss);
if (oss.str() != "-2 0 1 7 11 ")
throw std::logic_error("remove case two fails");
}
//almost like case 2, but has three added in it
void test_remove_case_three()
{
binary_search_tree<int> tree;
tree.try_insert(4);
tree.try_insert(7);
tree.try_insert(11);
tree.try_insert(1);
tree.try_insert(-2);
tree.try_insert(0);
tree.try_insert(3);
tree.delete_if_exists(4);
std::ostringstream oss;
tree.inorder_print(oss);
if (oss.str() != "-2 0 1 3 7 11 ")
throw std::logic_error("remove case three fails");
}
int main(){
std::cout << "running remove case 1...\n";
test_remove_case_one();
std::cout << "remove case 1 passed successfuly\n";
std::cout << "running remove case 2...\n";
test_remove_case_two();
std::cout << "remove case 2 passed successfuly\n";
std::cout << "running remove case 3...\n";
test_remove_case_three();
std::cout << "remove case 3 passed successfuly\n";
}
## Explanations
I believe the implementation guidelines are very important, so I decided to keep them here, as the safest place to keep notes for me is SE posts (I know it is quite weird). I included pictures for remove, as it is not as straightforward as others (pictures go over the same examples as in the code).
### insert
Quite easy. Launch a recursion. If the current node is nullptr, insert at this node and return true (keep in mind that all pointers are passed by reference, thus the change will be reflected in the data structure itself. Also they always exist, no dangling references). If the value to-be-inserted is less than value in the node (IN THIS ORDER!), search right location to insert in the left subtree. If greater, search the location in the right subtree. If the value is equal, then return false.
### exists
Almost the same as insertion, but the true/false cases are reversed. If the value is equal to the value in the node, return true. If the node is nullptr, return false (nowhere else to search).
### remove
While searching for the node-to-remove, return these three important values: the parent node of the target node, the target node itself, and the direction from parent to target child. The suitable child to replace the to-be-removed node with is the one that is greater than the left child and less than the right child. If the value is in the tree, there are 3 cases (others are covered by these):
• to-be-removed node doesn't have left child (doesn't have suitable child)
Easy case. Relink parent of the to-be-removed node to the right child of the to-be-removed node (keep in mind DIRECTION!). Delete the to-be-removed node. Don't forget to update root if the to-be-removed node is root.
• to-be-removed node has left child, but the child doesn't have right child (has suitable child which is left child of to-be-removed node)
Somewhat easy too. Relink parent to the suitable child, change the right child of suitable child to the right child of to-be-removed node. Delete to-be-removed node. Update root if it is affected.
• to-be-removed node has suitable child which is far (> 1 depth) away
Find the rightmost child of the left child of to-be-removed node. Make sure the parent of the suitable child is no longer linked to the suitable child. Relink parent of the to-be-removed node to the suitable child. Relink left and right of the to-be-removed node the left and right of the suitable child, respectively. Delete to-be-removed node.
## clear
If current node is nullptr, return. Go to the left, then to the right. Finally delete the current node.
• Let me know if pictures are too big. I'll try to do something about it tomorrow. Sorry for the terrible handwriting. – Incomputable Jun 3 '18 at 22:13
• I was actually going to comment on how nice your handwriting is! – erip Jun 3 '18 at 23:46
• @erip, thanks! I believe I should work on the style consistency (e.g. same height of lower case letters and same width of the same letters). I've got 20 more lectures to go through on youtube, so I'll probably be taking more notes. – Incomputable Jun 3 '18 at 23:56
The first of your problems, namely code duplication, stems from the wrong interface. exists_helper returning a boolean violates a very important (and unfortunately little known) principle: do not throw away the information you computed. In the case of exists_helper, what is computed is an insertion point for the value you did not find. Returning that makes insert_helper, find, and delete thin wrappers around exists_helper.
Along the same line, insert shall return an indication of success/failure along with the node itself, STL style. It doesn't really matter in the case of integers, but for a more complex ValueType we are usually interested in what prevented insertion.
I don't approve of the recursive nature of exists and insert. They are tail recursive, and are better written iteratively.
Regarding deletion, I think you over-engineered it. Consider this pseudocode:
if left child exists,
find the rightmost descendant
rightmost descendant's->right = right child
return left child as new root
else
return right child as new root
Note that your case 2 doesn't need to be special cased: left child itself is the rightmost descendant.
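To make that control flow concrete, here is a rough transliteration of the pseudocode (written in Python for brevity; the function and field names are illustrative and not part of the reviewed code):

```python
def remove_subtree_root(node):
    """Remove `node` from its subtree and return the new subtree root."""
    if node.left is not None:
        # Find the rightmost descendant of the left child.
        rightmost = node.left
        while rightmost.right is not None:
            rightmost = rightmost.right
        # Hang the removed node's right subtree under it.
        rightmost.right = node.right
        return node.left
    # No left child: the right child (possibly None) becomes the new root.
    return node.right
```

The caller then re-links the parent (or the tree's root pointer) to the returned node, which collapses the three special cases in delete_if_exists into one.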
• I remember the principle being mentioned by Marshall Clow on one of the CppCon talks. I agree that I could make some functions iterative. I'll add input iterator support, lower_bound and upper_bound and then will post again with the suggestions from this post. – Incomputable Jun 4 '18 at 0:26
• I once wrote a tail-recursive algorithm for binary trees, then rewrote it as an iterative algorithm. The recursive version was faster. Maybe it’s me... :) – Cris Luengo Jun 4 '18 at 1:33
binary_search_tree() :
root(nullptr)
{}
Use default initialization of the members, in-line in the class. If you have:
node* root = nullptr;
then your default constructor will be generated automatically.
enum class direction
{
is_root,
left,
right
};
Last time I implemented a type of binary tree (the outer layer of a “T Tree”, with auto-balancing inserts and deletes) I took advantage of the symmetry between left and right to not write all the code twice. Instead of separately named left and right child nodes, I made an array of 2 children, so I have child[0] and child[1].
All the logic that deals with left vs. right is mirrored with the key less vs greater. So I implement it once using the abstract S and D instead of Left and Right, and switch which is which in the key comparison. That is, on the less-than case, S is left and D is right; on the greater-than case, the other way around. Since their values are 0 and 1, the conditional code just sets S and then D is always 1-S. child[S] will be either left or right, depending on the choice. Follow me?
Although, it is the balancing code that is most of the work (and this eliminated many test cases too since it was all the same). If you do AVL tree or the like next, keep that in mind!
parent_to_child is unusual. It means you need the follow-up switch statement in delete_if_exists and the extra state in direction (or maybe the whole enum, in your case). I think it is more normal to return a pointer to the parent as well as a pointer to the found node (if any). That also works when you do not have a parent pointer in each node (the purest form of binary tree does not).
if (node_with_value == nullptr)
if (node_with_value->left == nullptr)
Don’t compare directly against nullptr. Use the contextual conversion to bool as a truth value that is meant for this purpose. This is especially when you start using smart pointers, as they implement an efficient operator bool rather than needing to do a comparison directly. | {} |
# Distinct Numbers
CUET CSE Fest 2022 - Inte...
Limits 1s, 512 MB · Interactive
This is an Interactive Problem.
The judge has a secret array $A$ of size $n$.
You can ask the judge at most $10*n$ queries, and after that, you have to figure out the number of distinct elements in the array.
Query: To do a query you will give the judge two integers $i, j(1 \leq i, j \leq n)$, the judge will reply with a character indicating the relation of the elements $A_i$ and $A_j$. There can be 3 types of responses,
1. < , if $A_i < A_j$
2. > , if $A_i > A_j$
3. = , if $A_i = A_j$
## Input
Interaction Details:
The interaction starts with reading an integer $n(1 \leq n \leq 1000)$, the size of the secret array.
After that to do a query print “? i j” in a line and flush the output stream.
Read a character in a line, the response from the judge.
When you are ready to answer, print “! x” in a line. Here x is the number of distinct elements in $A$. [Note: This does not count as one of the queries.]
After that flush the output stream and finish!
To flush the output stream, you may use:
• fflush(stdout) in C/C++
• stdout.flush() in Python
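A sketch of one possible interaction loop is shown below (Python, unofficial; it keeps a list of indices of pairwise-distinct elements sorted by value and binary-searches it, which needs roughly ⌈log₂ n⌉ ≤ 10 queries per element for n ≤ 1000 and therefore stays within the 10·n budget):

```python
def ask(i, j):
    print(f"? {i} {j}", flush=True)      # query the judge
    return input().strip()               # read '<', '>' or '='

def main():
    n = int(input())
    reps = [1]                           # indices of known-distinct elements, sorted by value
    for i in range(2, n + 1):
        lo, hi = 0, len(reps)
        duplicate = False
        while lo < hi:                   # binary search for A[i] among the representatives
            mid = (lo + hi) // 2
            r = ask(i, reps[mid])
            if r == '=':
                duplicate = True
                break
            if r == '<':
                hi = mid
            else:
                lo = mid + 1
        if not duplicate:
            reps.insert(lo, i)
    print(f"! {len(reps)}", flush=True)  # report the number of distinct elements

main()
```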
## Output
A sample interaction is given below if the secret array was [1, 2, 1, 3]
>> 4
<< ? 1 4
>> <
<< ? 1 3
>> =
<< ! 3
Another one if the secret array was [1, 1, 1, 2, 1, 1]
>> 6
<< ? 1 2
>> =
<< ? 3 4
>> <
<< ? 5 6
>> =
<< ! 2
Here '>>' indicates what your program reads and '<<' indicates what your program writes. These symbols are here to make it easy to understand. You do not have to print such symbols from your program. | {} |
# Notable Natives from Wisconsin Indian Tribes
Overview / Description:
Students will conduct research to learn about notable native people from one of the eleven tribes of Wisconsin and create a poster.
Learning goals/objectives:
After completing this activity, students should be able to . . .
Name at least three notable Native Americans from any of the eleven tribes of Wisconsin. Who are they? Which tribe do they belong to? What makes them a notable person?
Content Standards:
1. Cite several pieces of textual evidence to support analysis of what the text says explicitly as well as inferences drawn from the text.
Determine a theme or central idea of a text and analyze its development over the course of the text; provide an objective summary of the text.
Writing Standards 6–12
2. Write informative/explanatory texts to examine a topic and convey ideas, concepts, and information through the selection, organization, and analysis of relevant content.
a. Introduce a topic clearly, previewing what is to follow; organize ideas, concepts, and information, using strategies such as definition, classification, comparison/contrast, and cause/ effect; include formatting (e.g., headings), graphics (e.g., charts, tables), and multimedia when useful to aiding comprehension.
b. Develop the topic with relevant facts, definitions, concrete details, quotations, or other information and examples.
c. Use appropriate transitions to create cohesion and clarify the relationships among ideas and concepts.
d. Use precise language and domain-specific vocabulary to inform about or explain the topic.
e. Establish and maintain a formal style.
f. Provide a concluding statement or section that follows from and supports the information or explanation presented.
Research to Build and Present Knowledge
7. Conduct short research projects to answer a question, drawing on several sources and generating additional related, focused questions for further research and investigation.
Materials:
Assessment
Did student use step sheet to assist with poster? Does poster include all the required elements from the step sheet?
Wrap-Up:
Hang up posters and/or make a digital presentation of posters
Extension Activity (for intervention or enrichment): | {} |
# Convert into vector form
1. Jul 11, 2010
### Mentallic
1. The problem statement, all variables and given/known data
How do I convert $$ax_1+bx_2+cx_3+d=0$$ into vector form?
3. The attempt at a solution
I am completely at a loss here, mainly because I don't quite understand vector geometry.
2. Jul 11, 2010
### HallsofIvy
Re: Vectors
If we are to assume that "$x_1$", "$x_2$", and "$x_3$" are components of a vector then that equation would be written $<a, b, c> \cdot <x_1, x_2, x_3>+ d= 0$ where the first term is a "dot product".
3. Jul 11, 2010
### Mentallic
Re: Vectors
I better go read up on dot products then. Thanks.
4. Jul 11, 2010
### Mentallic
Re: Vectors
Before I go on, x1, x2 and x3 are just variables in 3 dimensions such as x,y,z. Not exactly sure if that is what you were assuming.
Ok so given the formula for a dot product of two vectors a and b is $$|a||b|\cos\theta$$ then we have $$\sqrt{(a^2+b^2+c^2)(x_1^2+x_2^2+x_3^2)}\cos\theta+d=0$$
This doesn't seem right... I don't know how to find the angle between each vector and this isn't anywhere near the kind of answer I'm looking for, it should be of a form similar to this:
$$<x_1,x_2,x_3>=<0,0,d>+\lambda<a,0,0>$$
Although I'm possibly just using the dot product all wrong.
5. Jul 11, 2010
Re: Vectors
That's true; however, there's a much simpler definition of the dot product in this case:
$$<a,b,c> \cdot <x_1,x_2,x_3> = ax_1+bx_2+cx_3.$$
As an additional remark, note that, for a plane through the origin in $$R^3$$, we have the following:
$$\vec{\textbf{n}} \cdot \vec{\textbf{x}} = 0,$$ where $$\vec{\textbf{n}} = <a,b,c>$$ is a normal vector to the plane and $$\vec{\textbf{x}} = <x_1,x_2,x_3>$$ is any point on the plane. This is intuitive when we consider the definition of the dot product that you provided. The angle between any point on the plane and a corresponding normal vector is 90 degrees. Thus, $$\cos(\theta) = \cos(90) = 0.$$
I hope this helps. | {} |
Instructions for ASU collaborators
These instructions are tailored to the ASU Transportation AI Lab.
The most important tip: ask questions! File an issue, email dabreegster@gmail.com, or ask for a Slack invite.
Installing
A new version is released every Sunday, but you probably don't need to update every week.
1. Go to https://github.com/a-b-street/abstreet/releases and download the latest .zip file for Windows, Mac, or Linux.
1. Unzip the folder and run play_abstreet.sh or play_abstreet.bat. If you get security warnings, see here.
1. On the main title screen, click Sandbox. This starts in Seattle by default, so change the map at the top.
1. Choose USA, then Phoenix.
1. You'll be prompted to download some files. It should be quick. After it's done, click Phoenix again.
You've now opened up the Tempe map!
A shortcut and improving the simulation in Tempe
On Windows, edit run_abstreet.bat and change the last line to:
game.exe --dev data/system/us/phoenix/maps/tempe.bin --infinite_parking 1> ..\\output.txt 2>&1
On Mac, edit run_abstreet.sh and change the last line to:
RUST_BACKTRACE=1 ./game --dev data/system/us/phoenix/maps/tempe.bin --infinite_parking 1> ../output.txt 2>&1
--dev data/system/us/phoenix/maps/tempe.bin will skip the title screen and start on the Tempe map by default; this will save you lots of time.
--infinite_parking disables the parking simulation. By default, there's an unrealistic amount of people walking around Tempe just to reach the spot where their car is parked. We don't have good data yet about on- and off-street parking, so it's best to just make driving trips begin and end at buildings or off the map, without a step to search for parking.
There are a bunch of other startup parameters you can pass here too.
Importing a Grid2Demand scenario
When you run https://github.com/asu-trans-ai-lab/grid2demand, you get an input_agents.csv file. You can import this into A/B Street as a scenario.
1. Change the traffic from none
1. Click import Grid2Demand data
1. Choose your input_agents.csv file
2. A new scenario will be imported. Later you can launch this from the same menu; the scenario will be called grid2demand
Grid2Demand needs a .osm file as input. The extract of Tempe that A/B Street uses is at https://abstreet.s3.us-east-2.amazonaws.com/dev/data/input/us/phoenix/osm/tempe.osm.gz. Note the file is compressed.
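If you need the uncompressed tempe.osm, one way to decompress it is a short Python snippet (a sketch using only the standard library; any gzip tool works just as well):

```python
import gzip
import shutil

# Decompress tempe.osm.gz into tempe.osm in the current directory.
with gzip.open("tempe.osm.gz", "rb") as src, open("tempe.osm", "wb") as dst:
    shutil.copyfileobj(src, dst)
```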
Modifying a scenario
You can transform a scenario before simulating it. This example will cancel all walking and biking trips from the scenario, only leaving driving and public transit.
1. After loading a scenario, click 0 modifications to traffic patterns
1. Click Change trip mode
2. Select the types of trips that you want to transform, and change them to cancel. Click Apply.
Importing Vol2Timing data
https://github.com/asu-trans-ai-lab/Vol2Timing/ produces timing.csv files that you can import into A/B Street.
1. Open the traffic signal editor for an intersection in A/B Street.
2. Click Edit entire signal
3. Choose import from a new GMNS timing.csv, then pick your file.
The import process isn't finished yet; some movements aren't matched properly, some movements are incorrectly marked as protected, and no crosswalks are imported yet. When you import, some error messages may be displayed, and others might wind up printed to STDOUT (captured in output.txt on Windows).
If you want to import timing for more intersections in the same map, after Edit entire signal, you should also have an option like import from GMNS C:\path\to\timing.csv.
Debugging timing.csv
Along with QGIS, you can also visualize timing.csv in A/B Street directly.
1. From the title screen, choose Internal dev tools.
2. Change the map if necessary.
3. Click view KML.
4. Click load KML file, then choose your timing.csv.
5. Each movement is drawn in red. You can hover over a line-string to see its attributes, and click to open all details.
1. Using the key=value filter on the left, you can type in no=3 to match stage_no=3 and easily check what movements belong to each stage. | {} |
## Latex test
Page for tests
1. wayne says:
Muchas gracias Tim! Much needed.
I think the parser might not like the \left [ and the \right ] in the script. Will look for some more explanation as to what is legal and what is not. WP was scanty on their latex page. Both of those were fine on some of the latex editors on line:
Latex editor
2. tchannon says:
Never know, might help with part of a tootorial for the Talkshop help pages.
3. wayne says:
Okay, does this make it happy?
$\Phi _{T_s}\approx\left (\frac{\mu-1}{2^\mu+1} (OLR+\frac{column\;mass}{2^\mu \sqrt{2^\mu}-\frac{1}{2} }) \right )^\frac{1}{\mu}\approx \sqrt[4]{\frac{3}{17} (I+\frac{m}{64} ) }$
[moderator adds a graphic file of what the html looks like ]
4. wayne says:
OK… super, super… much better. Thanks Tim. It was the square brackets.
Once that equation not only matched Earth and Venus’s surface temperature from OLR but also matched (~) the temperature of the Earth’s core (7530 K) from the surface irradiance of ~390 and ‘mass of Earth / area of Earth’ this needs to be taken a bit more seriously — mass and radiation power , radiation power and mass, as core units kg/s³ and kg/s³ in both cases. Now it is approaching the way I tend to look at physics, beautiful simple symmetry, that is if you can get rid of the human influence of such complexity.
Why has this never been raised by anyone?
5. wayne says:
Seems to narrow down to something more like this. Will WordPress parse it? TB & Tim, I will write a top-post on this if I can just carry this relationship one more step and into reality. Still don’t understand why it exists, so… will still currently call it a simple coincidence but a doozy!🙄
$T=\sqrt[4]{\frac{\sqrt[1/3]{\sigma}\cdot 4\cdot m+I}{\sigma}}$
6. wayne says:
Comma terms?
$T=\sqrt[4]{\varsigma\cdot m+\frac{I}{\sigma}}\,,\;\;\varsigma=4\sigma^\frac{-2}{3}$
7. wayne says:
Try a leading {\}left{.} … seems to stabalize the overall display size:
$\displaystyle\large\left.\frac{dp}{dh}=-\rho\,g$
Need a space at all or compressible?
$latex\displaystyle\large\left.\frac{dp}{dh}=-\rho\,g$
8. wayne says:
$\displaystyle \large \frac{dp}{dh}=-\rho\,g$
9. wayne says:
No space after dollarsign latex:
$latex\displaystyle\large c_p=\frac{\gamma-1}{\gamma}\frac{m}{\bar{R}}$
Space after dollarsign latex and all compressed:
$\displaystyle\large c_p=\frac{\gamma-1}{\gamma}\frac{m}{\bar{R}}$
10. wayne says:
Need \displaystyle\large at all?
$c_p=\frac{\gamma-1}{\gamma}\frac{m}{\bar{R}}$
11. wayne says:
Just \large:
$\large c_p=\frac{\gamma-1}{\gamma}\frac{m}{\bar{R}}$
12. wayne says:
Better! You do need the \displaystyle, what of missing \large:
$\displaystyle c_p=\frac{\gamma-1}{\gamma}\frac{m}{\bar{R}}$
13. wayne says:
Does -pre- tag work now? No tabs and monospace font aligned with a style=font-family: “Lucida Console, Courier New” override:
1885 1985 1885 1985
W W W/m² W/m²
Incidence Solar 2.50E+17 1.70E+17 490.13 333.29
Reflected Solar 9.00E+16 6.00E+16 176.45 117.63
Solar abs by earth & atm 1.60E+17 1.10E+17 313.68 215.66
Solar to heat earth & atm 1.00E+17 7.30E+16 196.05 143.12
Power to evap 4.20E+16 4.00E+16 82.34 78.42
Rainfall 4.00E+15 7.84
Land 1.50E+14 0.29
Wind & Currents 4.15E+15 2.00E+15 8.14 3.92
Core to Surface 2.80E+13 3.20E+13 0.05 0.06
Tides 4.40E+11 3.00E+12 0.00 0.01
Power in plants 1.50E+13 4.00E+13 0.03 0.08
Power in fossil fuels 2.00E+12 8.00E+12 0.00 0.02
14. wayne says:
I see no way around this mis-formatting or -pre- tags. Still there and just ignores the pre-formatting by insisting on a non-monospaced face. I give up.
But TB, there's a look at Hertz's view of energy in 1885 in W/m2.
15. tallbloke says:
The way to fix it is to put the html tag < code > round it. I’ve added that to your table and it looks right now. Good old wordpress
http://codex.wordpress.org/Writing_Code_in_Your_Posts
16. wayne says:
Thanks Rog! A -code- tag it will be. Just happened over here to test a new latex tag \dfrac to see if wordpress will allow it. Was thinking on something Will said and thought he would get a real kick out of this string of related logic. -har-de-har-🙄
$E = m \, c^{2}$ and true $\dfrac{ E }{ c } = m \, c$ and $m \, c = m \, v = p =$ momentum when at the speed of light so that implies that $E = p \, c$, but we also learn that $E$ is equal to $\dfrac{p \, v}{2} = \dfrac{m \, v \, v}{2} = \frac{1}{2} \, m \, v^{2}$ which does not quite equal the original equation by a factor of two so could that 1/2 be the constant you spoke of Will?😉
Just kidding… kind of!
See more on E=mc² here:
http://www.science20.com/hammock_physicist/whats_wrong_emc2
Now, try that on WordPress.
17. wayne says:
TEST: note, can wp.latex understand all of the special tags used in this example and does it handle the type sizing correctly?
$\displaystyle\left(\dfrac{\mathrm d E}{\mathrm d t}\right)_{sys} = \dot{Q}_{net\;in}\;-\;\dot{W}_{net\;out}\;-\;\int\limits_{CS}P\,(\vec{V}\bullet\hat{n})\,\mathrm d A$ | {} |
# Performance-based physician reimbursement and influenza immunization rates in the elderly. The Primary-Care Physicians of Monroe County.
### Abstract
To investigate the effect of performance-based financial incentives on the influenza immunization rate in primary care physicians' offices. Randomized controlled trial during the 1991 influenza immunization season. Rochester, New York, and surrounding Monroe County during the Medicare Influenza Vaccine Demonstration Project. A total of 54 solo or group practices that had participated in the 1990 Medicare Demonstration Project. All physicians in participating practices agreed to enumerate their ambulatory patients aged 65 or older who had been seen during the 1990 or 1991 calendar years, and to track the immunization rate on a weekly basis using a specially designed poster from September 1991 to January 1, 1992. Additionally, physicians agreed to be randomized, by practice group, to the control group or to the incentive group, which could receive an additional $.80 per shot or $1.60 per shot if an immunization rate of 70% or 85%, respectively, was attained. The main outcome measures are the 1991 immunization rate and the improvement in immunization rate from the 1990 to 1991 influenza seasons for each group practice. For practices in the incentive group, the mean immunization rate was 68.6% (SD 16.6%) compared with 62.7% (SD 18.0%) in the control group practices (P = .22). The median practice-specific improvement in immunization rate was +10.3% in the incentive group compared with +3.5% in the control group (P = .03). Despite high background immunization rates, this modest financial incentive was responsible for approximately 7% increase in immunization rate among the ambulatory elderly.
| {}
# Contravariant Vector (newbie in trouble)
• August 3rd 2012, 10:27 AM
AA23
Contravariant Vector (newbie in trouble)
Hello everybody, great forum. This is my first post so any help you can provide would be great (Hi)
I have attached the problem with this post, the first part has been solved but I am confused for the second part.
I know all the inverse differential coefficients but am unsure how to use them.
• August 3rd 2012, 11:35 AM
GJA
Re: Contravariant Vector (newbie in trouble)
Hi, AA23.
I'm a little confused on the notation in the handout. It says the contravariant vector field is $V^{\mu}=(x-y, x^{2}+y)^{T}$. However, $\mu$ is a summation index in the rule for contravariant transformation. I think the vector field should be just $V$, then $V^{1}=1$ at (3,2) and $V^{2}=11$ at (3,2). Am I understanding this right? If I'm correct does this help clear up what was confusing you?
Good luck!
• August 3rd 2012, 11:45 AM
AA23
Re: Contravariant Vector (newbie in trouble)
Hi GJA, I see what u mean about it being a summation index in the rule for contravariant transformation, however that is the way the exam question has been written. I'm unclear as to how I should use the information established along with the inverse differential coefficients to establish the correct answer, please see attachment below for calculations.
Thanks again Attachment 24414
• August 3rd 2012, 12:53 PM
GJA
Re: Contravariant Vector (newbie in trouble)
Hi again, AA23. I think I see what needs to be done here. The notation in the worksheet could use a little improving (mentioned last post). Also, a negative sign got dropped in $\frac{\partial \theta}{\partial x}$; it should be
$\frac{\partial\theta}{\partial x}=-\frac{\sin\theta}{r}$.
In fairness, typing all the sub/super-scripts and forumlas perfectly is never easy in geometry - particularly in MS Word.
Now for your exercise the "barred coordinates" are the contravariant components in polar coordinates $(r, \theta)$, and the "unbarred coordinates" are the contravariant components in Cartesian coordinates $(x,y)$. So, in this example,
1. $u^{1}=x$
2. $u^{2}=y$
3. $z^{1}=r$
4. $z^{2}=\theta$
Since your instructor is directing us to compute $\frac{\partial\theta}{\partial x}$ and $\frac{\partial\theta}{\partial y}$, we are figuring out what $\overline{V^{2}}$ is at the point (3,2). Expanding the summation in the contravariant transformation formula for $\overline{V^{2}}$, and using 1, 2 & 4 above, we have
5. $\overline{V^{2}}=\frac{\partial\theta}{\partial x}\cdot V^{1} +\frac{\partial\theta}{\partial y}\cdot V^{2}$
Now we know the following:
6. $V^{1}=1$ at (3,2)
7. $V^{2}=11$ at (3,2)
8. $\frac{\partial\theta}{\partial x}=-\frac{\sin\theta}{r}=-\frac{\sin\theta}{\sqrt{13}}$ at (3,2)
9. $\frac{\partial\theta}{\partial y}=\frac{\cos\theta}{r}=\frac{\cos\theta}{\sqrt{13}}$ at (3,2).
The thing that remains for you to determine is what are $\cos\theta$ and $\sin\theta$ at (3,2)? Once you figure this out you will know 8 & 9. After figuring out 8 & 9 you plug 6-9 in equation 5 and if everything was done correctly you should get your instructor's solution of 31/13.
Let me know if there are any other questions. Good luck!
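For reference, a worked version of that remaining step (assuming the point $(3,2)$ means $x=3$, $y=2$, so $r=\sqrt{13}$): $\cos\theta = \frac{x}{r} = \frac{3}{\sqrt{13}}$ and $\sin\theta = \frac{y}{r} = \frac{2}{\sqrt{13}}$, hence

$\overline{V^{2}} = -\frac{\sin\theta}{r}\cdot V^{1} + \frac{\cos\theta}{r}\cdot V^{2} = -\frac{2}{13}\cdot 1 + \frac{3}{13}\cdot 11 = \frac{31}{13}$,

which agrees with the quoted solution.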
• August 3rd 2012, 01:19 PM
AA23
Re: Contravariant Vector (newbie in trouble)
After establishing values for sin theta and cos theta and substituting them back into equation 5 I do indeed get 31/13.
However in the answer given it was, 25/ SQRT 13 , 31/13
What is the 25/ SQRT 13 ?
Thank you once again
• August 3rd 2012, 01:27 PM
GJA
Re: Contravariant Vector (newbie in trouble)
$25/\sqrt{13}$ is the contravariant component in the r-coordinate. Basically, what we just did was to get the theta portion of the coordinate, but you still need to determine the r part. You do the same thing we just did, only this time you will need to use $\frac{\partial r}{\partial x}$ and $\frac{\partial r}{\partial y}$ at (3,2). I will skip writing the details for now; I think you can do it if you follow the model of the previous part. Give it an honest effort, and if you're stuck I'll give more details. Good luck!
• August 3rd 2012, 01:38 PM
AA23
Re: Contravariant Vector (newbie in trouble)
Thank you once again GJA, I copied the previous method and obtained 25 / SQRT 13. (Rock)
• August 5th 2012, 06:23 AM
AA23
Re: Contravariant Vector (newbie in trouble)
Just some further questions regarding this topic, if for example I was asked to find the following but for a covariant vector, would this be correct?
Please see attachments for my attempt (Nod)
• August 6th 2012, 04:19 AM
AA23
Re: Contravariant Vector (newbie in trouble)
• August 6th 2012, 05:17 AM
GJA
Re: Contravariant Vector (newbie in trouble)
Hi, AA23. The second part looks fine. The MHF hasn't been allowing me to respond after I post a couple of things on a thread for some reason. Don't know why it's letting me do it now.
• August 6th 2012, 05:38 AM
AA23
Re: Contravariant Vector (newbie in trouble)
Hi GJA, thank you for responding. I actually believe I've made an error, I think it should be 23/SQRT17 , 7.
I had the sin and cos values in the wrong place for the r coordinate calculation.
I do have a follow up question, how do I go about establishing the following (see attachment). I imagine it is an identical method with just a few changes (Nod)
• August 8th 2012, 07:20 AM
AA23
Re: Contravariant Vector (newbie in trouble)
Anyone have any advice on answering the question above, I've tried a similar technique to above but am sure is incorrect. Thank you in advance | {} |
Difference between relaxation and resonance leading to an absorption spectral feature?
I need help understanding the physics behind this insightful comment below the question Does water really have strong EM absorption at 3 kHz in solid and 2 GHz in liquid? Why the huge shift?:
Please note: This is a relaxation spectrum. The transitions do not come from a resonance (absorption, like in NMR, IR, etc.), but an applied electric field has a lossy (at low frequencies) or elastic (at higher f.) effect on some mode of molecular orientation of electrical dipoles in your sample.
The following comment links to https://en.wikipedia.org/wiki/Dielectric_spectroscopy which confirms this.
Both mechanisms can to a peak in the imaginary part of the dielectric permittivity at a certain frequency or energy, and at this point the slope of real part is steepest.
But is there a simple yet intuitive way to understand the physical difference between a spectral absorption feature due to a resonance process versus a relaxation process?
• Two things. Concerning the actual example, the link and comments should suffice. It is the nature or level of interaction that differs. The interaction between the applied em field and the sample results in a different orientation of the dipoles in the sample and it is not quantised at molecular level. Each same dipole would equally interact with the field as in a merely electrostatic interaction. The spectroscopic character is due to scanning the frequency at which the field is reversed, so that different samples adapt to it differently, promptly or not, or even not at all. Jul 15, 2019 at 9:07
• Concerning terminology, relaxation can be encountered also in a more molecular spectroscopy context, when indeed transitions due to resonance between the applied em field and quantised energetic levels (electronic, vibrational, rot. and their mixes) take places. Except that you won't scan for them, but parameters like the time needed for the sample to return back. A simple example: you excite a molecule at an energy you know it suffices to give luminescence emissions, than you follows the luminescence decay in time. Here you have a resonance process to pump a relaxation phenomenon. Jul 15, 2019 at 9:19
• For me nothing. The overall result is well discussed in the linked answer. For you: you are the only one knowing if my comments were useful or not. I did compress anything. Just unable to formulate a nicely concise answer. If the comments weren't useful I can delete them. Let me know. Jul 15, 2019 at 9:33
• Didn't notice you weren't polemic so perhaps I have been so in last comment. But it answers well. I do have an intuitive answer, the problem is to convey that. I am confident I went the right direction. I can add one more thing: in the original Q you mention absorption peak. But while this might be done in a colloquial scenario, the peaks are in Permittivity, at the end. Not Absorbance. I think this note helps, too. Jul 15, 2019 at 9:49
• @Karl distinction without a difference. If you have a sample of ordinary matter and you have a non-zero imaginary part of the permittivity you have absorption, and vice-versa. Use $(n+ik)^2=\epsilon'+i\epsilon"$ to get $k$ then Beer–Lambert to get absorption of power per unit depth.
– uhoh
Jul 17, 2019 at 12:46
The difference can be viewed in two ways, I guess.
Firstly, for relaxation spectroscopy, you apply a forced oscillation on your sample. That can be mechanical (dynamic rheology, mHz to .1kHz, several kHz with very special, not commercially available equipment) or electrical (dielectric spectroscopy, low mHz to GHz range). The mathematical concepts behind both methods are very similar. For vibrational, MR, UV, etc. spectroscopy, the sample just does not interact with frequencies outside of the linewidth of some QM transition. The width of a resonant peak is given by the energy needed for the transition, and the lifetime of the excited state. A feature in a relaxation spectrum is typically at least half an order of magnitude wide, from "sample can follow the external stimulus and stay close to a (dynamic) equilibrium" to "external stimulus is much too fast for the sample to follow at all".
The other point is that in relaxation spectroscopy, energy is transferred and dissipated into the sample below a specific frequency (invariably as heat), and reflected (elastically) at higher frequencies. The energy is not quantised, because the induced change in the sample is translatory. The peak frequency itself has no real significance; its inverse is called a "relaxation time" of the sample. A statistical quantity, mostly. For QM transitions, energy is only transmitted at the given frequency/energy, and nothing happens otherwise, except Rayleigh scattering.
Basically, resonant spectroscopy deals with individual quantum mechanical species, and relaxation spectroscopy deals with the classical, unquantised newtonian properties of the whole sample. NMR is a bit of a hybrid: The induced transitions are purest QM, but you can only observe the classic induction of the ensemble, never individual photons.
Technically, relaxation spectra are usually taken by applying the stimulus at discrete frequencies, stepwise, and you want to have a phase-sensitive detector (voltage - current, deformation - torque).
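To make the contrast concrete, here is a small numerical sketch (illustrative parameter values only, using the $e^{-i\omega t}$ sign convention so that the loss is $+\operatorname{Im}\varepsilon$): a Debye relaxation gives a broad loss peak at $\omega \approx 1/\tau$, while a Lorentz oscillator gives a comparatively narrow resonant loss line at $\omega \approx \omega_0$.

```python
import numpy as np

omega = np.logspace(-2, 2, 2001)          # angular frequency, arbitrary units

# Debye relaxation: eps(w) = eps_inf + d_eps / (1 - i w tau)
tau, d_eps, eps_inf = 1.0, 5.0, 2.0
eps_debye = eps_inf + d_eps / (1 - 1j * omega * tau)

# Lorentz oscillator: eps(w) = eps_inf + d_eps * w0^2 / (w0^2 - w^2 - i gamma w)
w0, gamma = 10.0, 0.5
eps_lorentz = eps_inf + d_eps * w0**2 / (w0**2 - omega**2 - 1j * gamma * omega)

# Frequencies of maximum loss: broad peak near 1/tau vs. sharp line near w0
print(omega[np.argmax(eps_debye.imag)])    # ~1  (relaxation)
print(omega[np.argmax(eps_lorentz.imag)])  # ~10 (resonance)
```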
• Thank you for your post, I will now give this some thought. It would be great if you could add a supporting link or two to provide an opportunity for further reading for me and future readers. In both cases you apply an electromagnetic stimulus using an antenna or pair of plates or something similar; why would it be called a forced oscillation only for relaxation spectroscopy and not in the case of resonant spectroscopy?
– uhoh
Jul 15, 2019 at 23:41
• Well, outside of resonance, oscillation can only be forced. The difference is that you are actually interested in the nonresonant reaction of the sample. In e.g. optical spectroscopy, the sample also interacts with off frequencies, by elastic (Rayleigh) scattering.
– Karl
Jul 16, 2019 at 6:35 | {} |
probability-0.2.5.2: Probabilistic Functional Programming
Numeric.Probability.Example.Kruskal
Description
Given a row of n (~50) dice, two players start with a random dice within the first m (~5) dice. Every player moves along the row, according to the pips on the dice. They stop if a move would exceed the row. What is the probability that they stop at the same die? (It is close to one.)
Wuerfelschlange (german): http://www.math.de/exponate/wuerfelschlange.html/
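As a rough illustration of the problem statement (this is not part of the package's API; the parameter values and names are arbitrary), a Monte Carlo estimate of the coupling probability could look like this:

```python
import random

def coupling_probability(n=50, m=5, trials=20_000, seed=0):
    """Estimate the probability that two walkers on a row of n dice,
    each starting on a random die among the first m, stop on the same die."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        dice = [rng.randint(1, 6) for _ in range(n)]

        def final_position(start):
            pos = start
            while pos + dice[pos] < n:   # stop once the next jump would leave the row
                pos += dice[pos]
            return pos

        hits += final_position(rng.randrange(m)) == final_position(rng.randrange(m))
    return hits / trials

print(coupling_probability())   # expected to be close to 1, as stated above
```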
Synopsis
# Documentation
type Die = Int Source #
die :: (C prob experiment, Fractional prob) => Score -> experiment Die Source #
type Score = Int Source #
game :: (C prob experiment, Fractional prob) => Score -> Score -> (Score, Score) -> experiment (Maybe Score) Source #
We reformulate the problem to the following game: There are two players, both of whom collect points. In every round the player with the smaller score throws a die and adds the pips to his score. If the two players at some point reach the same score, then the game ends and that score is the result of the game (Just score). If one of the players exceeds the maximum score n, then the game stops and the players lose (Nothing).
gameFastFix :: Score -> Score -> Dist (Score, Score) -> Dist (Maybe Score) Source #
This version could be generalized to both Random and Distribution monad while remaining efficient.
In gameFastFix we group the scores by rounds. This leads to a growing probability distribution, but we do not need the round number. We could process the game in a different way: We only consider the game states where the lower score matches the round number.
gameLeastScore can be written in terms of a matrix power. For n pips we need a n² × n² matrix. Using symmetries, we reduce it to a square matrix with size n·(n+1)/2.
  p[n+1,(n+1,n+1)]              p[n,(n+0,n+0)]
| p[n+1,(n+1,n+2)] |          | p[n,(n+0,n+1)] |
| p[n+1,(n+1,n+3)] |          | p[n,(n+0,n+2)] |
|       ...        |          |      ...       |
| p[n+1,(n+1,n+6)] | = M/6 ·  | p[n,(n+0,n+5)] |
| p[n+1,(n+2,n+2)] |          | p[n,(n+1,n+1)] |
|       ...        |          |      ...       |
| p[n+1,(n+2,n+6)] |          | p[n,(n+1,n+5)] |
|       ...        |          |      ...       |
  p[n+1,(n+6,n+6)]              p[n,(n+5,n+5)]
M[(n+1,(x,y)),(n,(x,y))] = 6
M[(n+1,(min y (n+d), max y (n+d))), (n,(n,y))] = 1
M[(n+1,(x1,y1)),(n,(x0,y0))] = 0
cumulate :: Ord a => Dist (Maybe a) -> [(Maybe a, Probability)] Source #
trace :: Score -> [Score] -> [Score] Source #
chop :: [Score] -> [[Score]] Source #
bruteforce :: Score -> Score -> (Score, Score) -> T (Maybe Score) Source #
This is a bruteforce implementation of the original game: We just roll the die maxScore times and then jump from die to die according to the number of pips. | {} |
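For readers who want a quick numerical check of the claim that the coincidence probability is close to one, here is a minimal Monte-Carlo sketch of the original dice-row game in Python (not part of this package; the die count, start window and number of trials are arbitrary choices):

```python
import random

def final_index(row, start):
    """Follow the dice row from 'start', jumping by the pips, until a jump would leave the row."""
    i = start
    while i + row[i] < len(row):
        i += row[i]
    return i

def estimate_coincidence(n=50, m=5, trials=100_000):
    """Estimate the probability that two players starting within the first m dice stop at the same die."""
    hits = 0
    for _ in range(trials):
        row = [random.randint(1, 6) for _ in range(n)]
        a = random.randrange(m)
        b = random.randrange(m)
        if final_index(row, a) == final_index(row, b):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    # Prints an estimate close to one, in line with the description above.
    print(estimate_coincidence())
```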
# Trending arXiv
Note: this version is tailored to @Smerity - though you can run your own! Trending arXiv may eventually be extended to multiple users ...
### Papers
#### Phrase-Based & Neural Unsupervised Machine Translation
Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, Marc'Aurelio Ranzato
Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of bitexts, which hinders their applicability to the majority of language pairs. This work investigates how to learn to translate when having access to only large monolingual corpora in each language. We propose two model variants, a neural and a phrase-based model. Both versions leverage automatic generation of parallel data by backtranslating with a backward model operating in the other direction, and the denoising effect of a language model trained on the target side. These models are significantly better than methods from the literature, while being simpler and having fewer hyper-parameters. On the widely used WMT14 English-French and WMT16 German-English benchmarks, our models respectively obtain 27.1 and 23.6 BLEU points without using a single parallel sentence, outperforming the state of the art by more than 11 BLEU points.
Captured tweets and retweets: 4
#### EMBER: An Open Dataset for Training Static PE Malware Machine Learning Models
Hyrum S. Anderson, Phil Roth
This paper describes EMBER: a labeled benchmark dataset for training machine learning models to statically detect malicious Windows portable executable files. The dataset includes features extracted from 1.1M binary files: 900K training samples (300K malicious, 300K benign, 300K unlabeled) and 200K test samples (100K malicious, 100K benign). To accompany the dataset, we also release open source code for extracting features from additional binaries so that additional sample features can be appended to the dataset. This dataset fills a void in the information security machine learning community: a benign/malicious dataset that is large, open and general enough to cover several interesting use cases. We enumerate several use cases that we considered when structuring the dataset. Additionally, we demonstrate one use case wherein we compare a baseline gradient boosted decision tree model trained using LightGBM with default settings to MalConv, a recently published end-to-end (featureless) deep learning model for malware detection. Results show that even without hyper-parameter optimization, the baseline EMBER model outperforms MalConv. The authors hope that the dataset, code and baseline model provided by EMBER will help invigorate machine learning research for malware detection, in much the same way that benchmark datasets have advanced computer vision research.
Captured tweets and retweets: 1
#### Training a Ranking Function for Open-Domain Question Answering
Phu Mon Htut, Samuel R. Bowman, Kyunghyun Cho
Captured tweets and retweets: 2
Noam Shazeer, Mitchell Stern
Captured tweets and retweets: 2
#### Associative Compression Networks for Representation Learning
Alex Graves, Jacob Menick, Aaron van den Oord
This paper introduces Associative Compression Networks (ACNs), a new framework for variational autoencoding with neural networks. The system differs from existing variational autoencoders (VAEs) in that the prior distribution used to model each code is conditioned on a similar code from the dataset. In compression terms this equates to sequentially transmitting the dataset using an ordering determined by proximity in latent space. Since the prior need only account for local, rather than global variations in the latent space, the coding cost is greatly reduced, leading to rich, informative codes. Crucially, the codes remain informative when powerful, autoregressive decoders are used, which we argue is fundamentally difficult with normal VAEs. Experimental results on MNIST, CIFAR-10, ImageNet and CelebA show that ACNs discover high-level latent features such as object class, writing style, pose and facial expression, which can be used to cluster and classify the data, as well as to generate diverse and convincing samples. We conclude that ACNs are a promising new direction for representation learning: one that steps away from IID modelling, and towards learning a structured description of the dataset as a whole.
Captured tweets and retweets: 25
#### Copula Variational Bayes inference via information geometry
Viet Hung Tran
Variational Bayes (VB), also known as independent mean-field approximation, has become a popular method for Bayesian network inference in recent years. Its application is vast, e.g. in neural network, compressed sensing, clustering, etc. to name just a few. In this paper, the independence constraint in VB will be relaxed to a conditional constraint class, called copula in statistics. Since a joint probability distribution always belongs to a copula class, the novel copula VB (CVB) approximation is a generalized form of VB. Via information geometry, we will see that CVB algorithm iteratively projects the original joint distribution to a copula constraint space until it reaches a local minimum Kullback-Leibler (KL) divergence. By this way, all mean-field approximations, e.g. iterative VB, Expectation-Maximization (EM), Iterated Conditional Mode (ICM) and k-means algorithms, are special cases of CVB approximation. For a generic Bayesian network, an augmented hierarchy form of CVB will also be designed. While mean-field algorithms can only return a locally optimal approximation for a correlated network, the augmented CVB network, which is an optimally weighted average of a mixture of simpler network structures, can potentially achieve the globally optimal approximation for the first time. Via simulations of Gaussian mixture clustering, the classification's accuracy of CVB will be shown to be far superior to that of state-of-the-art VB, EM and k-means algorithms.
Captured tweets and retweets: 2
#### Pose2Seg: Human Instance Segmentation Without Detection
Ruilong Li, Xin Dong, Zixi Cai, Dingcheng Yang, Haozhi Huang, Song-Hai Zhang, Paul L. Rosin, Shi-Min Hu
The general method of image instance segmentation is to perform the object detection first, and then segment the object from the detection bounding-box. More recently, deep learning methods like Mask R-CNN perform them jointly. However, little research takes into account the uniqueness of the "human" category, which can be well defined by the pose skeleton. In this paper, we present a brand new pose-based instance segmentation framework for humans which separates instances based on human pose, not proposal region detection. We demonstrate that our pose-based framework can achieve similar accuracy to the detection-based approach, and can moreover better handle occlusion, which is the most challenging problem in the detection-based framework.
Captured tweets and retweets: 1
#### Datasheets for Datasets
Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, Kate Crawford
Currently there is no standard way to identify how a dataset was created, and what characteristics, motivations, and potential skews it represents. To begin to address this issue, we propose the concept of a datasheet for datasets, a short document to accompany public datasets, commercial APIs, and pretrained models. The goal of this proposal is to enable better communication between dataset creators and users, and help the AI community move toward greater transparency and accountability. By analogy, in computer hardware, it has become industry standard to accompany everything from the simplest components (e.g., resistors), to the most complex microprocessor chips, with datasheets detailing standard operating characteristics, test results, recommended usage, and other information. We outline some of the questions a datasheet for datasets should answer. These questions focus on when, where, and how the training data was gathered, its recommended use cases, and, in the case of human-centric datasets, information regarding the subjects' demographics and consent as applicable. We develop prototypes of datasheets for two well-known datasets: Labeled Faces in The Wild~\cite{lfw} and the Pang \& Lee Polarity Dataset~\cite{polarity}.
Captured tweets and retweets: 1
#### Group Normalization
Yuxin Wu, Kaiming He
Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems --- BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN's usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN's computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform or compete with its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code in modern libraries.
Captured tweets and retweets: 7
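For orientation, the group-wise computation described in the Group Normalization abstract above can be sketched in a few lines of NumPy (an illustrative reimplementation, not the authors' code; the NCHW tensor layout and epsilon value are assumptions):

```python
import numpy as np

def group_norm(x, num_groups, gamma, beta, eps=1e-5):
    """Group Normalization of an NCHW tensor: normalize over each group of channels and all spatial positions."""
    n, c, h, w = x.shape
    x = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = x.mean(axis=(2, 3, 4), keepdims=True)
    var = x.var(axis=(2, 3, 4), keepdims=True)
    x = (x - mean) / np.sqrt(var + eps)
    x = x.reshape(n, c, h, w)
    # gamma and beta are learned per-channel scale and shift parameters
    return x * gamma.reshape(1, c, 1, 1) + beta.reshape(1, c, 1, 1)
```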
#### Generative Multi-Agent Behavioral Cloning
Eric Zhan, Stephan Zheng, Yisong Yue, Patrick Lucey
We propose and study the problem of generative multi-agent behavioral cloning, where the goal is to learn a generative multi-agent policy from pre-collected demonstration data. Building upon advances in deep generative models, we present a hierarchical policy framework that can tractably learn complex mappings from input states to distributions over multi-agent action spaces. Our framework is flexible and can incorporate high-level domain knowledge into the structure of the underlying deep graphical model. For instance, we can effectively learn low-dimensional structures, such as long-term goals and team coordination, from data. Thus, an additional benefit of our hierarchical approach is the ability to plan over multiple time scales for effective long-term planning. We showcase our approach in an application of modeling team offensive play from basketball tracking data. We show how to instantiate our framework to effectively model complex interactions between basketball players and generate realistic multi-agent trajectories of basketball gameplay over long time periods. We validate our approach using both quantitative and qualitative evaluations, including a user study comparison conducted with professional sports analysts.
Captured tweets and retweets: 2
#### Setting up a Reinforcement Learning Task with a Real-World Robot
A. Rupam Mahmood, Dmytro Korenkevych, Brent J. Komer, James Bergstra
Reinforcement learning is a promising approach to developing hard-to-engineer adaptive solutions for complex and diverse robotic tasks. However, learning with real-world robots is often unreliable and difficult, which resulted in their low adoption in reinforcement learning research. This difficulty is worsened by the lack of guidelines for setting up learning tasks with robots. In this work, we develop a learning task with a UR5 robotic arm to bring to light some key elements of a task setup and study their contributions to the challenges with robots. We find that learning performance can be highly sensitive to the setup, and thus oversights and omissions in setup details can make effective learning, reproducibility, and fair comparison hard. Our study suggests some mitigating steps to help future experimenters avoid difficulties and pitfalls. We show that highly reliable and repeatable experiments can be performed in our setup, indicating the possibility of reinforcement learning research extensively based on real-world robots.
Captured tweets and retweets: 2
#### FEVER: a large-scale dataset for Fact Extraction and VERification
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Arpit Mittal
Unlike other tasks and despite recent interest, research in textual claim verification has been hindered by the lack of large-scale manually annotated datasets. In this paper we introduce a new publicly available dataset for verification against textual sources, FEVER: Fact Extraction and VERification. It consists of 185,441 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as Supported, Refuted or NotEnoughInfo by annotators achieving 0.6841 in Fleiss $\kappa$. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment. To characterize the challenge of the dataset presented, we develop a pipeline approach using both baseline and state-of-the-art components and compare it to suitably designed oracles. The best accuracy we achieve on labeling a claim accompanied by the correct evidence is 31.87%, while if we ignore the evidence we achieve 50.91%. Thus we believe that FEVER is a challenging testbed that will help stimulate progress on claim verification against textual sources.
Captured tweets and retweets: 19
#### Variance Networks: When Expectation Does Not Meet Your Expectations
Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov
Ordinary stochastic neural networks mostly rely on the expected values of their weights to make predictions, whereas the induced noise is mostly used to capture the uncertainty, prevent overfitting and slightly boost the performance through test-time averaging. In this paper, we introduce variance layers, a different kind of stochastic layers. Each weight of a variance layer follows a zero-mean distribution and is only parameterized by its variance. We show that such layers can learn surprisingly well, can serve as an efficient exploration tool in reinforcement learning tasks and provide a decent defense against adversarial attacks. We also show that a number of conventional Bayesian neural networks naturally converge to such zero-mean posteriors. We observe that in these cases such zero-mean parameterization leads to a much better training objective than conventional parameterizations where the mean is being learned.
Captured tweets and retweets: 2
#### The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities
Joel Lehman, Jeff Clune, Dusan Misevic, Christoph Adami, Julie Beaulieu, Peter J. Bentley, Samuel Bernard, Guillaume Belson, David M. Bryson, Nick Cheney, Antoine Cully, Stephane Doncieux, Fred C. Dyer, Kai Olav Ellefsen, Robert Feldt, Stephan Fischer, Stephanie Forrest, Antoine Frénoy, Christian Gagné, Leni Le Goff, Laura M. Grabowski, Babak Hodjat, Laurent Keller, Carole Knibbe, Peter Krcah, Richard E. Lenski, Hod Lipson, Robert MacCurdy, Carlos Maestre, Risto Miikkulainen, Sara Mitri, David E. Moriarty, Jean-Baptiste Mouret, Anh Nguyen, Charles Ofria, Marc Parizeau, David Parsons, Robert T. Pennock, William F. Punch, Thomas S. Ray, Marc Schoenauer, Eric Schulte, Karl Sims, Kenneth O. Stanley, François Taddei, Danesh Tarapore, Simon Thibault, Westley Weimer, Richard Watson, Jason Yosinski
Biological evolution provides a creative fount of complex and subtle adaptations, often surprising the scientists who discover them. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, evolution's creativity is not limited to nature. Indeed, many researchers in the field of digital evolution have observed their evolving algorithms and organisms subverting their intentions, exposing unrecognized bugs in their code, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. Such stories routinely reveal creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.
Captured tweets and retweets: 1
#### Accelerated Methods for Deep Reinforcement Learning
Deep reinforcement learning (RL) has achieved many recent successes, yet experiment turn-around time remains a key bottleneck in research and in practice. We investigate how to optimize existing deep RL algorithms for modern computers, specifically for a combination of CPUs and GPUs. We confirm that both policy gradient and Q-value learning algorithms can be adapted to learn using many parallel simulator instances. We further find it possible to train using batch sizes considerably larger than are standard, without negatively affecting sample complexity or final performance. We leverage these facts to build a unified framework for parallelization that dramatically hastens experiments in both classes of algorithm. All neural network computations use GPUs, accelerating both data collection and training. Our results include using an entire NVIDIA DGX-1 to learn successful strategies in Atari games in single-digit minutes, using both synchronous and asynchronous algorithms.
Captured tweets and retweets: 2
#### A Reductions Approach to Fair Classification
Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, Hanna Wallach
We present a systematic approach for achieving fairness in a binary classification setting. While we focus on two well-known quantitative definitions of fairness, our approach encompasses many other previously studied definitions as special cases. Our approach works by reducing fair classification to a sequence of cost-sensitive classification problems, whose solutions yield a randomized classifier with the lowest (empirical) error subject to the desired constraints. We introduce two reductions that work for any representation of the cost-sensitive classifier and compare favorably to prior baselines on a variety of data sets, while overcoming several of their disadvantages.
Captured tweets and retweets: 2
#### Composable Planning with Attributes
Amy Zhang, Adam Lerer, Sainbayar Sukhbaatar, Rob Fergus, Arthur Szlam
The tasks that an agent will need to solve often are not known during training. However, if the agent knows which properties of the environment are important then, after learning how its actions affect those properties, it may be able to use this knowledge to solve complex tasks without training specifically for them. Towards this end, we consider a setup in which an environment is augmented with a set of user defined attributes that parameterize the features of interest. We propose a method that learns a policy for transitioning between "nearby" sets of attributes, and maintains a graph of possible transitions. Given a task at test time that can be expressed in terms of a target set of attributes, and a current state, our model infers the attributes of the current state and searches over paths through attribute space to get a high level plan, and then uses its low level policy to execute the plan. We show in 3D block stacking, grid-world games, and StarCraft that our model is able to generalize to longer, more complex tasks at test time by composing simpler learned policies.
Captured tweets and retweets: 2
#### Towards end-to-end spoken language understanding
Dmitriy Serdyuk, Yongqiang Wang, Christian Fuegen, Anuj Kumar, Baiyang Liu, Yoshua Bengio
Spoken language understanding system is traditionally designed as a pipeline of a number of components. First, the audio signal is processed by an automatic speech recognizer for transcription or n-best hypotheses. With the recognition results, a natural language understanding system classifies the text to structured data as domain, intent and slots for down-streaming consumers, such as dialog system, hands-free applications. These components are usually developed and optimized independently. In this paper, we present our study on an end-to-end learning system for spoken language understanding. With this unified approach, we can infer the semantic meaning directly from audio features without the intermediate text representation. This study showed that the trained model can achieve reasonable good result and demonstrated that the model can capture the semantic attention directly from the audio features.
Captured tweets and retweets: 2
#### Do WaveNets Dream of Acoustic Waves?
Kanru Hua
Various sources have reported the WaveNet deep learning architecture being able to generate high-quality speech, but to our knowledge there haven't been studies on the interpretation or visualization of trained WaveNets. This study investigates the possibility that WaveNet understands speech by unsupervisedly learning an acoustically meaningful latent representation of the speech signals in its receptive field; we also attempt to interpret the mechanism by which the feature extraction is performed. Suggested by singular value decomposition and linear regression analysis on the activations and known acoustic features (e.g. F0), the key findings are (1) activations in the higher layers are highly correlated with spectral features; (2) WaveNet explicitly performs pitch extraction despite being trained to directly predict the next audio sample and (3) for the said feature analysis to take place, the latent signal representation is converted back and forth between baseband and wideband components.
Captured tweets and retweets: 2
#### The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, Dario Amodei
This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats. After analyzing the ways in which AI may influence the threat landscape in the digital, physical, and political domains, we make four high-level recommendations for AI researchers and other stakeholders. We also suggest several promising areas for further research that could expand the portfolio of defenses, or make attacks less effective or harder to execute. Finally, we discuss, but do not conclusively resolve, the long-term equilibrium of attackers and defenders.
Captured tweets and retweets: 1
## Introduction
Super-resolution optical microscopy methods have become essential tools in biology, and among these DNA-PAINT1,2,3,4,5 has proved especially versatile6,7. In DNA-PAINT, epitopes of interest are labeled with ‘docking’ DNA motifs, while dye-modified ‘imager’ oligonucleotides are introduced in solution. Transient hybridization to docking motifs immobilizes imagers for long enough to generate ‘blinks’ (events) in a camera frame, which can then be fitted to localize target epitopes with sub-diffraction resolution2. DNA-PAINT carries several advantages compared to competing approaches such as STORM8,9 and PALM10,11, eliminating the need for photo-switchable or chemically-switchable dyes and effectively circumventing photobleaching, due to fresh imagers continuously diffusing in from the bulk.
The unparalleled flexibility of DNA-PAINT comes at a cost, in the form of a number of serious drawbacks currently limiting the applicability and performance of the technology when imaging biological cells and tissues.
The presence of free imagers in solution produces a diffuse fluorescent background, which compromises event detection and localization precision. The impact of free-imager signals is particularly severe when imaging deep in biological tissues, where efficient background-rejection methods such as TIRF cannot be used.
In addition, imagers often exhibit substantial non-specific binding to biological preparations, which complicates data interpretation7 and can prevent detection of sparse targets12.
Both imager-induced background and non-specific events can be reduced by decreasing imager concentration. However, such a reduction also decreases event rates and extends image-acquisition timescales, which is often prohibitive due to limitations in mechanical and chemical sample stability.
Finally, despite it being effectively immune to photobleaching, DNA-PAINT has been shown to suffer from photo-induced inactivation of docking strands13.
Here, we introduce repeat DNA-PAINT, a straightforward strategy that mitigates all these critical limitations of DNA-PAINT.
## Results
### Repeat DNA-PAINT affords an increase in event rate
As demonstrated in Fig. 1a, c, we employ docking motifs featuring N identical Repeated Domains (Nx RD, N = 1, 3, 6, 10) complementary to imagers. Unless otherwise specified, we use a 9-nucleotide (nt) imager (P1) whose concentration is referred to as [I].
In the super-resolution imaging regime, only a small fraction of docking sites is occupied by imagers at any given time. In these conditions, and if all repeated docking domains are equally accessible to imagers as in a 1x RD motif, the spatial event density E is expected to be proportional to the product of imager concentration and repeat domain number N:
$$E = \rho_{\mathrm{DS}} \cdot N \cdot \frac{[I]}{K_{\mathrm{d}}},$$
(1)
where ρDS is the docking strand density (set by the density of markers in the sample) and Kd the binding affinity of imagers to a single docking domain (see also Supplementary Note 1).
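As a concrete illustration of this scaling (a sketch with placeholder values for ρDS and Kd, not measured quantities from this work), increasing N while reducing [I] by the same factor leaves E unchanged:

```python
def event_density(rho_ds, n_repeats, imager_conc, k_d):
    """Expected event density E from Eq. 1, valid in the low-occupancy imaging regime."""
    return rho_ds * n_repeats * imager_conc / k_d

rho_ds, k_d = 1.0, 10e-9                          # placeholder docking-strand density and affinity (M)
e_1x = event_density(rho_ds, 1, 0.4e-9, k_d)      # 1x RD at [I] = 0.4 nM
e_10x = event_density(rho_ds, 10, 0.04e-9, k_d)   # 10x RD at [I] = 40 pM
assert abs(e_1x - e_10x) < 1e-12                  # same event density with 10-fold less imager
```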
In agreement with Eq. 1, tests performed on functionalized microspheres demonstrate a linear growth in event rate with increasing N, for fixed imager concentration [I] = 50 pM (Fig. 1b). The experimental findings are confirmed by molecular simulations, relying on the oxDNA14 model and the Forward–Flux Sampling method to estimate imager-docking binding rates15 (Fig. 1b).
Simulations further highlight that, as expected, imagers bind all individual domains on the repeat-docking motifs with similar probability, proving that the elongation of docking motifs does not hinder their accessibility (Supplementary Fig. 1).
Equation 1 also indicates that, when using docking motifs with N repeats, the imager concentration can be reduced N-fold while preserving the event density E, or equivalently the event rate (when summed over a region of interest and quantified per frame).
To confirm this hypothesis we constructed DNA origami test tiles that display a number of "anchor" overhangs, initially connected to 1x RD docking motifs. The 1x RD motifs could then be displaced through a toehold-mediated strand-displacement reaction and replaced with a 10x RD strand (Fig. 1c). The event rate per origami tile was preserved when changing from 1x RD docking sites with 0.4 nM imager concentration to 10x RD docking sites with a 10-times lower imager concentration of 40 pM (Fig. 1d). The same strategy was applied to biological samples, specifically cardiac tissues6 where we labeled ryanodine receptors (RyRs) with the common anchor strand that initially held a 1x RD motif. As expected, we find near-identical event rates when imaging 1x RD with [I] = 0.4 nM versus replacing these with 10x RD with [I] = 40 pM (Supplementary Fig. 2).
### Repeat DNA-PAINT suppresses backgrounds and enhances resolution
The ability of Repeat DNA-PAINT to function optimally with a substantial (up to 10-fold) reduction in imager concentration makes it ideal for mitigating issues resulting from imagers in solution, the most direct being the fluorescent background produced by unbound imagers.
In Fig. 2 we therefore investigate the fluorescent background in cardiac tissue samples with conventional docking strands (1x RD) and repeat domains (10x RD). Visual assessment demonstrates a clear improvement in contrast between the two imaging modes, as shown by example frames in Fig. 2ai (1x RD) and Fig. 2aii (10x RD), to an extent that substantially improves the detectability of individual binding events and their localization precision16.
For a quantitative assessment, we measured background signals produced with [I] = 40 pM and 0.4 nM in optically thick tissues labeled with common anchor overhangs, but lacking docking motifs. Figure 2b (left pair of bars) demonstrates a near-linear increase of the fluorescent background with [I]. Once the markers were functionalized with docking strands, either 1x RD or 10x RD, the ratio of background levels was slightly lower, apparently due to an additional offset background (Fig. 2b, right pair of bars). We hypothesize that the additional background is generated by specific binding events occurring out of the plane of focus. These events are indeed expected to produce an out-of-focus signal proportional to the event rate, and thus similar when using 1x RD with 0.4 nM versus 10x RD with 40 pM of imager (by design).
It is expected that the substantial reduction in background afforded by Repeat DNA-PAINT translates into a significant improvement in resolution. To quantify this improvement we imaged deep (several microns) into optically thick (~20 µm) cardiac tissue using this technique. We performed a two-stage experiment as exemplified in Fig. 1c, first imaging with 1x RD at high [I] and then with 10x RD at low [I]. In both cases, we carried out Fourier Ring Correlation (FRC) measurements of the optical resolution in 2 × 2 µm2 regions across the ~24 × 20 µm2 imaging region (Fig. 2c). This yielded a mean FRC resolution measurement (Fig. 2d) of 123.7 ± 3.0 nm (SEM) for 1x RD, [I] = 0.4 nM, and 78.0 ± 1.8 nm (SEM) for 10x RD, [I] = 40 pM, confirming the substantial improvement in resolution with Repeat DNA-PAINT when background from imagers in solution cannot be effectively rejected, e.g., when imaging deep in thick tissue with widefield illumination (Fig. 2e).
### Repeat DNA-PAINT suppresses non-specific binding
Having proven the benefits of Repeat DNA-PAINT in reducing backgrounds and improving resolution, we assessed its impact on non-specific imager-binding events at unlabeled locations of biological samples. These non-specific events produce spurious blinks that are often difficult to distinguish from proximal specific signals. As expected, Fig. 3a shows that the rate of non-specific events, as detected in unlabeled cardiac tissue, scales linearly with [I]. Similar trends are observed for different imager sequences (Supplementary Fig. 4).
In Fig. 3b we study the time-sequence of imager-attachment events recorded in cardiac tissue, as a potential way of separating specific from suspected non-specific events. We compare a trace recorded within a likely unlabeled area, where only suspected non-specific events are observed, based on only one brief attachment phase (Fig. 3b, red region), with one measured at a location where docking strands are present and specific binding is detected (Fig. 3b, yellow region). We observe a qualitative difference between the two situations, with specific binding occurring steadily and suspected non-specific events being often localized in time1, similar to the time courses of imager attachment observed in data from unlabeled cardiac tissue, which underlies the summary data in Fig. 3a.
Although occasionally applicable, this identification strategy is only robust if specific and suspected non-specific binding sites are spatially isolated. In samples where docking strands are more densely packed and/or evenly distributed, non-specific events cannot be easily separated (Supplementary Fig. 5), introducing potential artifacts in the reconstructed images and distorting site-counting as performed, e.g., via qPAINT3.
Repeat DNA-PAINT offers a solution that avoids the complexity of identifying non-specific events, by directly reducing their occurrence to negligible levels, as demonstrated in Fig. 3c. Specifically, owing to the 10-fold reduction in imager concentration, image data collected with 10x RD on our cardiac samples only feature ~0.9% non-specific events, whereas conventional DNA-PAINT, here implemented with 1x RD docking strands, yields a ~8% non-specific contamination. We thus conclude that Repeat DNA-PAINT offers a robust route for suppressing spurious events independent of sample characteristics.
### Repeat DNA-PAINT mitigates photoinduced site damage
Despite its insensitivity to photobleaching, DNA-PAINT is subject to a progressive inactivation of docking sites, ascribed to their interaction with the free-radical states of photo-excited fluorochromes13. The domain redundancy in Repeat DNA-PAINT can greatly slow down site loss, as we demonstrate with origami test tiles nominally featuring six anchor sites (Fig. 4a). For tiles with 1x RD and 10x RD motifs, we compare the average number of sites actually detected on the tiles in the first 20 K frames of long imaging runs to those counted in the following 20 K frames. While for 1x RD tiles we observed a ~12.1% loss of docking sites between the two experimental intervals, 10x RD tiles lost just ~2.2% (Fig. 4b, c), a 5-fold suppression. Direct examination of the histograms describing the distribution of detectable sites per tile shows that with 1x RD more than 50% of the initially complete tiles lost at least one site (Fig. 4b). In contrast, the vast majority of complete DNA origami tiles remained intact when using 10x RD docking strands (Fig. 4c).
### Extended docking motifs do not affect spatial resolution
A potential issue deriving from the extension of the docking strands is the loss of spatial resolution17,18, as the flexible docking-imager complexes undergo rapid thermal fluctuations during binding events (see Supplementary Note 2). We used oxDNA simulations to quantify the resulting ‘blurring’, by sampling the distance between the tethering point of the docking strand and the fluorophore location of imagers hybridized to each binding site in 1x RD, 3x RD, and 6x RD motifs. The results, summarized in Fig. 5a, demonstrate narrow fluorophore distributions for the binding sites closest to the tethering point, and broader ones for the more distal sites, peaking at ~8 nm for the furthest domain.
Although this level of broadening may appear significant compared to the resolution of DNA-PAINT in optimal conditions (~5 nm19), it has little impact on the precision with which one can localize the labeled epitope by fitting the diffraction-limited image of a blink. The effect can be quantified by convolving the fluorophore distributions (Supplementary Fig. 6 and Supplementary Note 2) with the theoretical point-spread function (PSF) of the microscope, as shown in Fig. 5b. The PSF broadening is minute and produces, at most, a 0.12% shift in the location of the first Airy minimum.
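The magnitude of the effect can be illustrated with a back-of-the-envelope Gaussian sketch (the PSF width and the per-axis fluorophore spread below are assumed, illustrative numbers rather than values taken from the simulations):

```python
import numpy as np

sigma_psf = 100.0    # nm, rough Gaussian width of a high-NA PSF at ~670 nm (assumed)
sigma_fluor = 4.0    # nm, assumed per-axis spread of the fluorophore around the tethering point

# Convolving two Gaussians adds their variances, so the apparent blink width becomes:
sigma_total = np.sqrt(sigma_psf ** 2 + sigma_fluor ** 2)
relative_broadening = (sigma_total - sigma_psf) / sigma_psf
print(f"relative PSF broadening: {relative_broadening:.4%}")   # on the order of 0.1%, i.e. negligible
```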
We thus do not expect the larger physical size of multi-repeat docking motifs to cause any loss of experimental resolution. We confirmed this prediction with DNA-origami test samples (Fig. 5c), showing no detectable resolution difference between 1x RD and 10x RD, both rendering spots with an apparent diameter of ~13 nm (Fig. 5d and Supplementary Fig. 7). Similarly, the Fourier Ring Correlation (FRC) measure of resolution20 was essentially unaltered between 1x RD (12.2 ± 2.7 nm) and 10x RD (12.4 ± 2.7 nm) images, as shown in Fig. 5e. Note that when imaging origami test samples, the resolution is virtually unaffected by the higher imager concentration used with 1x RD and the consequent stronger free-imager background, in contrast to the case of thick biological tissues (Fig. 2). Indeed, origami samples represent a near-ideal scenario in which imaging can be carried out in TIRF mode, which is highly effective in rejecting out-of-focus backgrounds. Other imaging modes, necessary to investigate thicker biological samples, do not perform nearly as well, leading to the substantial benefits in terms of background and resolution associated with reducing imager concentration.
### Additional advantages of Repeat DNA-PAINT: qPAINT, enhanced imaging rate and photobleaching-free wide-field imaging
Repeat DNA-PAINT is also fully compatible with extensions of DNA-PAINT, such as qPAINT, a technique that estimates the number of available docking sites within a region of interest. We confirm the accuracy of qPAINT with origami tiles displaying five 10x RD motifs, where the technique estimates 4.93 ± 0.16 sites/tile (see Fig. 6a, and “Methods” section).
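For context, the qPAINT estimate rests on the mean dark time between binding events being inversely proportional to the number of available sites; a minimal sketch of that relation follows (the rate constant and concentrations below are placeholders, not calibrated values from this work):

```python
def qpaint_site_count(mean_dark_time_s, k_on, imager_conc_molar):
    """Estimate the number of binding sites in a region from the mean dark time.
    The influx rate is k_on * [imager] * n_sites, and the mean dark time is its inverse."""
    return 1.0 / (mean_dark_time_s * k_on * imager_conc_molar)

# Placeholder example: k_on = 1e6 /M/s, [I] = 1 nM and a mean dark time of 200 s give ~5 sites.
print(qpaint_site_count(200.0, 1e6, 1e-9))
```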
In addition, we point out that the boost in event-rate afforded by Repeat DNA-PAINT can also be exploited to increase image acquisition rate.
The key to increasing the imaging frame rate is using weakly binding imagers, which, thanks to a larger Kd and the associated larger off-rate, produce shorter events. In parallel, however, one would have to increase the imager concentration in direct proportion to Kd in order to retain a sufficiently high binding frequency, see also Eq. 1 and Supplementary Note 1. The concomitant increase in background (see also Fig. 2) would normally be prohibitive, but the event-rate acceleration afforded by Repeat DNA-PAINT allows imaging to be carried out at "normal" imager concentrations, in the sub-nanomolar range. Figure 6b indeed demonstrates that by simply replacing 1x RD with 10x RD at 'conventional' imager concentration ([I] = 0.3 nM), and using a shorter (low-affinity) 8 nt imager P1s, one can increase the frame rate 10-fold (from 100 ms to 10 ms) and reduce the overall imaging time ~6-fold. When performing accelerated imaging, we observe a slightly lowered limiting spatial resolution, from ~80 nm at 100 ms acquisition time to ~100 nm at 10 ms, see Supplementary Fig. 8. Note however that high-frame-rate acquisition can be further improved by optimizing illumination conditions, so that the number of photons collected from a dye molecule in a short-exposure frame equals that achieved at longer integration times. The ability of repeated-docking motifs to accelerate imaging has recently been confirmed by Straus et al.21, who, however, do not discuss the associated improvements in terms of background, resolution and non-specific signals.
Finally, Repeat DNA-PAINT enables effectively photobleaching-resistant, high-contrast, diffraction-limited imaging. In all the super-resolution applications described above, low imager concentrations are used so that only a small fraction of docking sites is occupied at any given instant. At higher imager concentrations, a significant fraction of the sites is occupied by imagers. Since imagers are still constantly exchanged with the surrounding solution, operating under these conditions would in principle allow for photobleaching-free diffraction-limited fluorescence imaging, including wide-field and point-scanning confocal. However, to achieve a sufficient docking-site occupancy with conventional 1x RD docking strands, one would have to increase the imager concentration to a point where the free-imager background massively reduces contrast. Repeat DNA-PAINT performed with 10x RD motifs solves this issue thanks to the intrinsically higher imager binding rates, which enable wide-field imaging at the imager concentrations normally used for conventional DNA-PAINT. This translates into a straightforward strategy for collecting high-contrast, photobleaching-free images of staining patterns (Supplementary Fig. 9).
## Discussion
In summary, we demonstrate that Repeat DNA-PAINT mitigates all key limitations of DNA-PAINT, namely non-specific events (10x reduction), free-imager background (~5x reduction) and photoinduced site loss (5x reduction), while also being able to accelerate data acquisition (6–10x). We also show that there is no observable impact on spatial resolution from "long" docking strands containing many repeat domains, which greatly extends the design space of Repeat DNA-PAINT. Notably, the implementation of Repeat DNA-PAINT is straightforward and does not carry any known drawbacks; it is routinely applicable, consolidating the role of DNA-PAINT as one of the most robust and versatile SMLM methods.
## Methods
### Experimental methods and materials
#### DNA-PAINT oligonucleotides
Oligonucleotide sequences were designed and checked with the NUPACK web application22 (www.nupack.org). Oligonucleotides were then purchased from either Integrated DNA Technologies (IDT, Belgium) or Eurofins Genomics (Eurofins, Germany) with HPLC purification. See Supplementary Table 1 for a full list of oligonucleotide sequences used.
#### DNA origami production and sample preparation
All oligonucleotides (staples) used to construct the origami tiles were purchased from IDT with standard desalting, pre-reconstituted in Tris EDTA (10 mM Tris + 1 mM EDTA, TE) buffer (pH 8.0) at 100 µM concentration. Rothemund Rectangular Origami (RRO) with various 3′ overhangs were manufactured following standard methods2. Picasso2 was used to generate staple sequences which yield an RRO with 3′ overhangs in specified locations on a single face of the planar origami. We designed overhangs which would then hybridize to 1x RD or 10x RD docking motifs (see anchor in Supplementary Table 1). Eight DNA strands had 5′ biotin modifications on the reverse face for anchoring. RROs were prepared by mixing in TE + 12.5 mM MgCl2 the scaffold (M13mp18, New England Biolabs, USA) at a concentration of 10 nM, biotinylated staples at 10 nM, staples featuring the "anchor" 3′ overhangs at 1 µM, and all other staples at 100 nM. Assembly was enabled through thermal annealing (Techne, TC-512 thermocycler) bringing the mixture to 80 °C and cooling gradually from 60 °C to 4 °C over the course of 3 h. A full list of staple sequences can be found in Supplementary Tables 5–7.
Number 1.5 coverslips were submerged in acetone before being moved to isopropanol and subsequently allowed to dry. These were then attached to open-top Perspex imaging chambers as depicted in23, allowing for easy access. For origami attachment, a 1 mg ml−1 PBS solution of biotin-labeled bovine serum albumin (A8549, Sigma) was applied to the chambers for 5 min and then washed with excess PBS. This was followed by a 1 mg ml−1 solution of NeutrAvidin (31000, ThermoFisher) for a further 5 min before being washed with PBS + 10 mM MgCl2 (immobilization buffer, IB). DNA-origami solutions were diluted to roughly 1 nM in IB solution and incubated for 5 min on the prepared coverslips. Unbound origami tiles were washed off using excess IB buffer. 1x RD or 10x RD docking motifs were introduced at ~200 nM, binding directly to the anchor overhangs on the origami tiles. The samples were then washed with a DNA-PAINT buffer (PB) of PBS containing 600 mM NaCl and pH corrected to 8.0 (adapted from 'Buffer C' in ref. 1).
#### Microsphere functionalization and sample preparation
Streptavidin-functionalized polystyrene particles with a diameter of 500 nm (Microparticles GmbH, Germany) were labeled with biotinylated oligonucleotides (Fig. 1a: docking motifs 1x RD, 3x RD, and 6x RD, see Supplementary Table 1) as described elsewhere24. Briefly the microspheres were dispersed in TE buffer containing 300 mM NaCl and the docking strands in 4x excess concentration as compared to the binding capacity of the beads. Unbound oligonucleotides were removed by a series of centrifugation and re-dispersion steps. These microspheres were attached via non-specific adhesion to coverslips cleaned as described above and coated by incubating them for 30 min with a 0.1 mg ml−1 solution of PLL-g-PEG (SuSoS, Duebendorf) in PBS.
#### Oligonucleotide to antibody conjugation
Anchor oligonucleotides (Supplementary Table 1) were conjugated to secondary antibodies for immunolabeling of cardiac samples. Lyophilized oligonucleotides were resuspended in PBS (pH 7.4) to 100 µM and kept at −20 °C for long term storage until required for conjugation. AffiniPure Goat Anti-Mouse secondary antibodies (affinity purified, #115-005-003, Jackson ImmunoResearch, PA) were conjugated using click-chemistry as described by Schnitzbauer et al.2 Briefly, the antibody was incubated with 10-fold molar excess DBCO-sulfo-NHS-ester (Jenabioscience, Germany) for 45 min. The reaction was quenched with 80 mM Tris-HCl (pH 8.0) for 10 min and then desalted using 7 K MWCO Zeba desalting columns (Thermo Fisher). A 10-fold molar excess of the azide modified oligonucleotide was then incubated with the DBCO-antibody mixture overnight at 4 °C. Subsequently the antibody was purified using 100 K Amicon spin columns (Sigma). The absorbance of the oligonucleotide-conjugated fluorophores (Cy3 or Cy5) was recorded with a Nanodrop spectrophotometer (Thermo Fisher Scientific, Waltham) and used to quantify the degree of labeling for each conjugation, typically achieving >1–3 oligonucleotides per antibody.
#### Biological sample preparation and labeling
Cardiac tissue (porcine) was fixed with 2% paraformaldehyde (PFA, pH 7.4, Sigma) for 1 h at 4 °C. Samples were then washed in PBS and kept in PBS containing 10% sucrose for 1 h before being moved to 20% (1 h) and finally 30% sucrose overnight. The tissue was then frozen in cryotubes floating in 2-Methylbutane cooled by liquid nitrogen for 10–15 min. Pre-cleaned number 1.5 glass coverslips were coated for 15 min using 0.05% poly-L-lysine (Sigma). Tissue cryosections with thicknesses of 5–20 µm were adhered to the coverslips and kept at −20 °C until used. For DNA-PAINT experiments, the tissues were labeled with mouse primary anti ryanodine or anti actinin antibodies, and targeted by the oligonucleotide conjugated secondary antibodies. Immunohistochemistry was performed in imaging chambers as described above by first permeabilizing the tissue with 0.1% Triton X-100 in PBS for 10 min at room temperature (RT). The samples were blocked with 1% bovine serum albumin (BSA) for 1 h in a hydration chamber. The monoclonal mouse anti-ryanodine receptor (RyR, MA3-916, Thermo Fisher) primary antibody was incubated overnight (4 °C) with the sample at 5 µg mL−1 in a PBS incubation solution buffer containing 1% BSA, 0.05% Triton X-100 and 0.05% sodium azide, alpha-actinin (A7732, Sigma) was diluted 1:200 in incubation buffer and treated in the same manner. Samples were washed in PBS 3–4 times for 10–15 min each. Secondary antibodies, previously conjugated to oligonucleotides and stored at 1 mg ml−1 were diluted 1:200 in incubation solution, added to the samples, and left for 2 h at RT. The tissue was then finally washed a further 3 times in PB.
#### Imaging setup and analysis
A modified Nikon Eclipse Ti-E inverted microscope (Nikon, Japan) with ×60 1.49NA APO oil immersion TIRF objective (Nikon, Japan) was used to acquire super-resolution data. Images were taken using an Andor Zyla 4.2 sCMOS camera (Andor, UK) using a camera integration time of 100 ms, or 10 ms for accelerated acquisition (Fig. 6b and Supplementary Fig. 8). A tunable LED-light source (CoolLED, UK) was used where possible to illuminate the widefield fluorescence and check labeling quality prior to super-resolution imaging. A 642 nm continuous wave diode laser (Omikron LuxX, Germany) was used to excite the ATTO 655 imager strands for DNA-PAINT imaging. Microspheres and DNA-origami tiles were imaged in total internal reflection fluorescence (TIRF) mode, whilst tissue samples required highly inclined and laminated optical sheet (HILO) mode. An auxiliary camera (DCC3240N, Thorlabs) was used in a feedback loop to monitor and correct for focal drift, similar to McGorty et al.25, and previously implemented in ref. 6. Red fluorescent beads with a diameter of 200 nm (F8887, ThermoFisher Scientific) were introduced to the samples prior to DNA-PAINT imaging and later used in post-analysis to correct for lateral drift.
Operation of the microscope components, image acquisition and image analysis were conducted using the Python software package PyME26 (Python Microscopy Environment), which is available at https://github.com/python-microscopy/python-microscopy. Single molecule events were detected and fitted to a 2D Gaussian model. Localization events were rendered into raster images that were saved as tagged image file format (TIFF) either by generating a jittered triangulation of events or by Gaussian rendering27.
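For illustration, the fitting step can be sketched generically as follows (a self-contained example using SciPy, not the PyME implementation; the pixel size and initial guesses are arbitrary):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(coords, x0, y0, sigma, amplitude, offset):
    """Circularly symmetric 2D Gaussian model of a single-molecule blink."""
    x, y = coords
    return offset + amplitude * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def localize_event(roi, pixel_size_nm=110.0):
    """Fit one camera ROI (2D array) containing a blink and return the event position in nm."""
    ny, nx = roi.shape
    y, x = np.mgrid[0:ny, 0:nx]
    p0 = [nx / 2, ny / 2, 1.5, roi.max() - roi.min(), roi.min()]  # crude initial guess
    popt, _ = curve_fit(gaussian_2d, (x.ravel(), y.ravel()), roi.ravel(), p0=p0)
    x0, y0 = popt[:2]
    return x0 * pixel_size_nm, y0 * pixel_size_nm
```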
#### DNA-PAINT experiments
A step-by-step protocol describing the procedure for conducting Repeat DNA-PAINT can be found at Protocol Exchange28. All DNA-PAINT experiments were conducted with solutions made up in PB, described above, and imaged at 10 frames/s (100 ms integration time) unless otherwise stated. Typically, the imager concentration in experiments with n-times repeated docking motifs was reduced n-fold in comparison to the concentration used for a single docking motif on the same sample. 3′ ATTO 655 modified imagers were diluted to 0.04–0.4 nM (biological samples) and 0.2–2 nM (origami) depending on the Nx RD motif present and the experiment and sample in use. For experiments where 1x RD and 10x RD motifs had to be connected to anchor strands, these were added at 100 nM (biological samples) or 200 nM (origami). The azide-modified anchor strand used for experiments involving biological samples was labeled with a 3′ Cy5 or Cy3 fluorophore to aid both the click-chemistry conjugation and the identification of a suitable location to image within the biological sample. The widefield dye was rapidly photobleached prior to DNA-PAINT imaging and therefore did not contribute to the super-resolution data. In order to switch between 1x RD and 10x RD as highlighted in Fig. 1c, the displacer strand D was introduced at ~100 nM and allowed to remove the incumbent docking motif. Washing, in order to remove excess D and D-1x RD (or D-10x RD) complexes, was conducted with the n-times lower imager concentration before subsequently adding the new n-times repeat docking motif as above. Figure 3b was rendered by jittered triangulation utilizing >40k frames for 1x RD segments.
#### Microsphere test samples: event-rate quantification
To quantify event rates in Fig. 1b microspheres decorated with 1x RD, 3x RD, or 6x RD were imaged with [I] = 50 pM collecting 5000 frames. The three populations of microspheres were imaged individually (n = 82, 88, 68 for 1x/3x/6x functionalized microspheres) in a split imaging chamber but using the same imager solution to guarantee an equal imager concentration. Event rates were calculated as mean value of the number of detected binding events per second and per individual microsphere.
#### Biological tissue: event-rate quantification
Event-rate traces in Supplementary Fig. 2 were obtained using tissue samples immuno-labeled to show the RyR with the anchor strand initially harboring 1x RD prior to being displaced and exchanged, as described above, with 10x RD. An imager concentration of 0.4 nM was used for 1x RD, while [I] = 40 pM was used for the washing stage between the removal of excess 10x RD and its imaging. The number of localized events was counted per second by taking the sum of events collected over 10 frames (the camera integration time was set to 100 ms). The entire experiment involved more than 110k frames (>3 h).
#### Biological tissue: non-specific event determination
Immunohistostained tissue with non-functionalized anchor strands affixed only to RyR (Fig. 3c) was first imaged with 40 pM P1 ATTO 655 imager (no designated complementary docking site available) and subsequently with 0.4 nM in order to ascertain the level of non-specific binding. We verified that (1) the P1 and anchor sequences were completely non-complementary, (2) the spatial pattern formed by the detected non-specific events had a random appearance and bore no relationship to the specific pattern observed when docking strands were attached to anchors, and (3) the temporal pattern of attachments was typical of that observed for suspected non-specific events (see also Fig. 3b). These same regions were then functionalized with 1x RD and later 10x RD docking strands and imaged again with their respective equivalent imager concentrations (1x RD [0.4 nM], 10x RD [40 pM]) as used previously. The number of events per 5 min window, repeated over a duration of 20 min, was recorded for each segment.
#### Biological tissue: background measurements
Background measurements were recorded in tissue where no imager had previously been present by measuring the mean background per 1k frames over 5k total. The intrinsic (no-imager) signal obtained was subtracted from subsequent measurements. Non-functionalized (anchor only) recordings were ascertained using the events from 5k frames for both 40 pM (n = 712) and 0.4 nM (n = 5002). When functionalized with either 1x or 10x RD the background measurement for the relative imager concentrations were obtained from events over 30k frames for each modality (n = 683k (1x RD), 537k (10x RD)).
#### Biological tissue: fourier ring correlation maps
Fourier Ring Correlation (FRC) measurements were performed using a PYME implementation available through the PYME-extra set of plugins (https://github.com/csoeller/PYME-extra/)29. After drift correction was applied, the event series was split into two equal blocks by assigning alternating segments of 100 frames to each block; these were then used to generate two rendered Gaussian images which were compared using the FRC approach as described in ref. 20. Briefly, the intersection of the FRC curve with the 1/7 line was used to obtain an estimate of the FRC resolution. In order to generate the FRC map presented in Fig. 2c, optically thick ~20 µm porcine tissue, labeled for alpha actinin, was imaged near the surface furthest from the objective with the excitation laser orientated to pass straight out of the objective lens. For 1x RD measurements the detection threshold in the PYME analysis pipeline26 (https://github.com/python-microscopy/python-microscopy) was set to 1.3. Because this threshold is signal-to-noise-ratio based, it was adjusted to 2.0 for 10x RD measurements in order to have equivalent foreground mean photon yields in detected events, which ensured that equivalent detection settings were used. 2 × 2 µm regions of interest were individually segmented in time, utilizing 30k frames for each modality (1x/10x RD), and two Gaussian images with a pixel size of 5 nm were rendered for each square. Localization precision, as shown in the Fig. 2d inset, was determined by the PyME localization algorithm, which estimates the localization error from the co-variance of the weighted least-squares penalty function at convergence, see also30.
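A generic version of the FRC computation described above can be sketched as follows (an illustrative implementation, not the PYME-extra code; it correlates the two half-data images ring by ring in Fourier space and reads off where the curve first drops below 1/7):

```python
import numpy as np

def frc_curve(img1, img2):
    """Fourier Ring Correlation between two equally sized, square images."""
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    n = img1.shape[0]
    y, x = np.indices((n, n)) - n // 2
    rings = np.hypot(x, y).astype(int).ravel()
    num = np.bincount(rings, (f1 * np.conj(f2)).real.ravel())
    den1 = np.bincount(rings, (np.abs(f1) ** 2).ravel())
    den2 = np.bincount(rings, (np.abs(f2) ** 2).ravel())
    return num / np.sqrt(den1 * den2)

def frc_resolution(img1, img2, pixel_size_nm):
    """Return the resolution (nm) where the FRC curve first drops below the 1/7 threshold."""
    frc = frc_curve(img1, img2)
    below = np.nonzero(frc < 1.0 / 7.0)[0]
    if len(below) == 0 or below[0] == 0:
        return None                      # no meaningful threshold crossing
    k = below[0]                         # ring index = spatial-frequency bin
    return img1.shape[0] * pixel_size_nm / k
```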
#### Biological tissue: accelerated sampling
For the data summarized in Fig. 6b we initially sampled anchor strands directly as per normal DNA-PAINT experimentation, using a 9 nt P5 imager (see Supplementary Table 1), [I] = ~0.3 nM and a camera integration time of 100 ms. Following this sequence, 10x RD was introduced at 100 nM and allowed to hybridize to the anchor. Excess 10x RD was washed out with PB. The camera integration time was decreased to 10 ms and the excitation laser intensity was also increased by removing an ND0.5 filter. A shorter P1s imager strand (8 nt) was then added at [I] = ~0.3 nM, and blinking events recorded. The total number of frames acquired was 20k in the first experimental phase and 160k in the second. FRC measurements were taken from four regions across the sample at intervals of 1k or 10k frames to obtain the plot in Supplementary Fig. 8.
#### Biological tissue: widefield functionality using repeat domains
Cardiac tissue labeled for alpha actinin was first imaged, in widefield-mode, using the Cy3 dye attached to the anchor strand, Supplementary Fig. 9. Next, the anchor strands were functionalized with 10x RD motifs and imaged in widefield-mode using a nominally low P1 ATTO 655 imager concentration of ~1 nM, illuminated with 647 nm laser excitation and imaged with 500 ms camera integration time. After acquiring widefield data the imager concentration was reduced with a series of washes in DNA-PAINT buffer and replaced with 40 pM P1 ATTO 655 imager, and the sample was imaged as normal for super-resolution.
#### Origami test samples: event-rate quantification
To quantify event rates in Fig. 1d, origami tiles were first functionalized and imaged with 1x RD motifs using 2 nM P1 ATTO 655 imager. After approximately 40k frames the 1x RD motifs were displaced and replaced with 10x RD, and the imager concentration was reduced by a factor of ten. Tiles identified as having had all sites occupied within the imaging period (n = 49 1x RD and n = 81 10x RD tiles) were used to ascertain the number of events per second per tile.
#### Origami test samples: resolution measurements
Imaging resolution was assessed in origami test samples with the design in Fig. 5c, featuring a row of three point-like binding sites labeled with 1x RD or 10x RD docking domains (attached via anchor overhangs). Resolution was quantified from the intensity profiles measured across the three sites in the rendered images (Fig. 5d and Supplementary Fig. 7). Estimations of the full width at half maximum of the peaks were sampled over 30 individual sites (10 origami) for both 1x RD and 10x RD.
#### Origami test samples: FRC measurements
Single origami tiles were selected and rendered at 0.5 nm pixel size in ~210 nm2 boxes and the FRC analysis, described previously in ‘Biological tissue: Fourier ring correlation maps’, was applied to tiles from 1x RD (n = 47 tiles) data series and 10x RD (n = 80 tiles) docking motif data series, respectively.
#### Origami test samples: quantification of photoinduced site loss
Origami tiles with 6 binding sites, carrying either 1x RD or 10x RD docking motifs, were imaged for 40k frames. Tiles that could be identified in an image rendered from the first 20k frames (a total of 442 tiles for 1x RD origami and 285 tiles for 10x RD origami) were then inspected in an image rendered from frames 20k to 40k, and the number of detectable sites was counted again. Site loss, expressed as a percentage (Fig. 4), was specified as the difference between the sites detected in the first 20k frames and the sites detected in the second 20k frames.
#### Origami test samples: qPAINT analysis of 6 and 5-spot tiles
To establish compatibility of qPAINT analysis with 10x RD motifs, origami tiles as shown in Fig. 6a, with 6 and 5 spots, respectively, were selected for qPAINT analysis in the python-microscopy environment. The qPAINT analysis approach essentially follows Jungmann et al.3. Event time traces obtained by analysis in the PYME software environment were used to determine dark times, i.e., time intervals between detected fluorescence events. Due to dye blinking and event detection noise (e.g., events being above detection threshold in one frame but below detection threshold in a consecutive one) there was an additional distribution of very short dark times, typically <10 frames. In a cumulative histogram we modeled this behavior as resulting in a cumulative distribution function (CDF) of the form:
$${\mathrm{CDF(t)}} = \alpha \left( {1 - e^{ - \frac{t}{{\tau _B}}}} \right) + \left( {1 - \alpha } \right)\left( {1 - e^{ - \frac{t}{{\tau _D}}}} \right),$$
(2)
where 0 < α < 1 and the fast blinking time τB was constrained to be <8 frames. The dark time τD obtained by fitting this CDF to experimental dark time distributions was used to conduct qPAINT analysis. To calculate the number of binding sites uncalibrated qPAINT indices were determined as the inverse of dark times6,31. The qPAINT indices were pooled for 6 and 5 spot containing tiles, respectively. The histogram of qPAINT indices for 6-spot tiles was fit with a Gaussian as shown in Fig. 6a. The center of the fitted Gaussian was used to obtain a qPAINT index calibration value for 6–10x RD docking motifs. The calibration was applied to all data, and the qPAINT estimate of the number of 10x RD motifs on 5-spot tiles obtained through gaussian fitting of the calibrated qPAINT histogram in Fig. 6a.
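As an illustration of the dark-time analysis step, the sketch below fits the two-component CDF of Eq. (2) to a list of dark times and converts the fitted τD into an uncalibrated qPAINT index. It is a minimal stand-in for the PYME-based analysis, not the actual code; the starting values, the bounds and the synthetic example data are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def cdf_model(t, alpha, tau_b, tau_d):
    """Two-component dark-time CDF: fast blinking (tau_b) plus true dark time (tau_d)."""
    return alpha * (1 - np.exp(-t / tau_b)) + (1 - alpha) * (1 - np.exp(-t / tau_d))

def fit_dark_times(dark_times_frames):
    """Fit Eq. (2) to measured dark times (in frames) and return tau_D."""
    t = np.sort(np.asarray(dark_times_frames, dtype=float))
    ecdf = np.arange(1, t.size + 1) / t.size
    p0 = (0.3, 3.0, np.median(t))                 # rough starting values (assumed)
    bounds = ([0, 0, 0], [1, 8, np.inf])          # tau_B constrained to < 8 frames
    (alpha, tau_b, tau_d), _ = curve_fit(cdf_model, t, ecdf, p0=p0, bounds=bounds)
    return tau_d

# qPAINT index = 1 / tau_D; dividing by a single-site calibration gives site numbers.
# Example with synthetic dark times (frames):
rng = np.random.default_rng(1)
dark = np.concatenate([rng.exponential(3, 200), rng.exponential(250, 800)])
tau_d = fit_dark_times(dark)
qpaint_index = 1.0 / tau_d
```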
### Simulation methods
#### Spatial fluorophore distribution in binding events
Estimates of the probability distributions of fluorophore locations in Fig. 5a were acquired through molecular simulations using the coarse-grained model oxDNA15. oxDNA is top–down parametrized and describes each nucleotide as a site with 6 anisotropic interactions: excluded volume, stacking, cross-stacking, hydrogen bonding, backbone connectivity and electrostatic repulsion. Here we used the updated oxDNA2 force field with explicit electrostatics32.
The systems were simulated using Monte Carlo (MC) sampling, and moves were proposed with the Virtual Move Monte Carlo (VMMC)33 scheme to better sample the highly correlated degrees of freedom. The maximum VMMC cluster-size was set to 12 nucleotides, with translational moves of 0.05 oxDNA units and rotational moves of 0.22 oxDNA units. Temperature was set to 300 K. We ran simulations at an effective monovalent salt concentration of 640 mM.
Separate simulations were initialized with the imager bound to each of the possible locations on docking strands 1x RD, 3x RD, and 6x RD. Large artificial biases were used to ensure that at least 7 of the 9 imager-docking bonds were formed, so that the two strands remained bonded for the duration of the simulation. The end-nucleotide of the docking motif corresponding to its anchoring point was confined to a point with a 3D harmonic potential.
Each system was simulated in 16 replicas, for between 9 × 10^5 and 2.7 × 10^6 MC steps. The position of the fluorophore-bearing nucleotide on the imager was taken as a proxy for that of the fluorophore (which cannot be simulated in oxDNA), and its location relative to the harmonic trap anchoring the docking motif was sampled every 500 steps. The fluorophore location was then projected onto the x-y plane to produce the 2D probability distributions in Supplementary Fig. 6, with uncertainties calculated between replicas (which however are negligible and unnoticeable in Fig. 5a). The probability distributions in Fig. 5a are obtained by radial averaging.
In Supplementary Note 2 we show that the timescales of relaxation of the imager-docking configuration into equilibrium are orders of magnitude faster than those of photon emission. One can thus assume that the physical locations from which photons are emitted are randomly drawn from the distributions of dye locations. The photon spatial distribution sampled by the microscope during each blink can therefore be estimated by convolving the distribution of fluorophore locations with the PSF, here approximated with an Airy disk whose full width at half maximum (FWHM) is 250 nm. Convolution between the PSF and fluorophore distributions is performed in 2D, and the radial cross sections are shown in Fig. 5b. This approximate PSF is justified as the FWHM of an Airy disk occurs at 0.51λ/NA ≈ 250 nm, using values of λ = 700 nm and NA = 1.45 that closely correspond to the experimental conditions in this study.
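A minimal sketch of this convolution step is shown below, assuming a radially symmetric fluorophore distribution sampled on a 2D grid and an Airy-disk PSF built from λ = 700 nm and NA = 1.45; the grid spacing and the placeholder Gaussian standing in for the oxDNA-derived fluorophore distribution are assumptions, not the actual analysis code.

```python
import numpy as np
from scipy.special import j1
from scipy.signal import fftconvolve

lam, NA = 700.0, 1.45                       # nm; values quoted in the text
px, n = 2.0, 513                            # grid: 2 nm pixels, 513 x 513
ax = (np.arange(n) - n // 2) * px
X, Y = np.meshgrid(ax, ax)
R = np.hypot(X, Y)

# Airy pattern: I(r) ~ [2 J1(k NA r) / (k NA r)]^2, with k = 2*pi/lambda
arg = 2 * np.pi * NA * np.maximum(R, 1e-6) / lam
psf = (2 * j1(arg) / arg) ** 2
psf /= psf.sum()

# Placeholder fluorophore-location distribution (e.g. from oxDNA sampling);
# here a broad Gaussian stands in for a multi-repeat docking motif.
fluor = np.exp(-R**2 / (2 * 15.0**2))
fluor /= fluor.sum()

blurred = fftconvolve(fluor, psf, mode="same")   # photon spatial distribution
profile = blurred[n // 2]                        # radial cross-section along x
```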
#### Evaluation of hybridization rate using forward flux sampling
We use molecular dynamics (MD) simulations performed with the oxDNA model to estimate the relative rates of hybridization of imagers to docking motifs with variable number of repeats (1x RD, 3x RD, and 6x RD) as shown in Fig. 1b. The absolute rates are not accessible, since diffusion rates in the coarse-grained representation oxDNA are not necessarily realistic.
For these simulations, the oxDNA force field is manually modified to eliminate intra-strand hydrogen bonding. Such a modification is necessary to prevent the appearance of a hairpin loop in 6x RD. Said loop is predicted not to occur by standard Nearest-Neighbor nucleic acid thermodynamics, as implemented in NUPACK34. We suspect the loop formation in oxDNA is an artifact related to identical excluded volume for purines and pyrimidines, so that duplex destabilization due to base pair mismatch is underestimated.
Our objective is to estimate the first order rate constant of imager hybridization to any binding domain of a tethered docking strand. Even with the highly coarse-grained oxDNA model, hybridizations are still rare over simulated timescales. To enhance sampling of hybridization events, we use Direct Forward Flux Sampling (FFS)35,36. FFS relies on defining a reaction coordinate onto which the state of the system can be projected. Along this coordinate one then identifies a number of intermediate system configurations between the initial and final states of interest. The rate for the system to evolve between the initial and final states can then be decomposed over the intermediate steps, which can be sampled more effectively.
Our implementation of FFS is based on that of Ouldridge et al.14. We define a reaction coordinate Q which can take all integer values between Q = −2 and Q = 4. For Q = −2, −1, 0 the reaction coordinate is defined based on the minimum distance dmin between the imager and the docking motifs, calculated considering any of the nucleotides on either strand. This includes nucleotide pairs that are non-complementary. For Q = 1…4, the coordinate is also dependent on Nbonds, the number of nucleotide bonds between docking strand and imager. Following ref. 37 we assume that two nucleotides are bound if their energy of hydrogen bonding is more negative than 0.1 simulation units, equivalent to 2.5 kJ mol−1. Q = 4 corresponds to our target state in which all 9 imager nucleotides are hybridized to the docking strand. Conditions associated with all values of Q are summarized in Supplementary Table 2. We indicate as $${\uplambda}_i^{i + 1}$$ the non-intersecting interfaces between states with consecutive values of the reaction coordinate, where i = −2…n − 1. E.g. $${\uplambda}_0^1$$ is the interface between states with Q = 0 and those with Q = 1. Note that for the system to transition from Q = −2 to Q = 4 it is necessary that all intermediate values of the reaction coordinate are visited.
The rate of imager-docking hybridization can then be calculated as
$$r = {{\Phi }}_{ - 2 \to 0}\mathop {\prod }\nolimits_{i = 1}^4 p\left( {i{\mathrm{|}}i - 1} \right)$$
(3)
Here, Φ−2→0 is the flux from interface $${\uplambda}_{ - 2}^{ - 1}$$ to $${\uplambda}_{ - 1}^0$$, and p(i|i−1) are the probabilities that when at interface $${\uplambda}_{i - 2}^{i - 1}$$, the system crosses interface $${\uplambda}_{i - 1}^i$$ before reverting back to interface $${\uplambda}_{ - 2}^{ - 1}$$.
The flux Φ−2→0 is estimated from a simulation run as $${{\Phi }}_{ - 2 \to 0} = \frac{{N_{ - 2 \to 0}}}{{T_{{\mathrm{sampling}}}}}$$, where N−2→0 is the number of successful transitions from states with Q = −2 to states with Q = 0 observed after simulating the system for Tsampling time steps. A successful transition is recorded every time the system first visits a state with Q = 0 after having occupied one with Q = −2. Prior to beginning to sample transitions, the system is equilibrated for 10^6 time steps. Note that generating Φ−2→0 at experimentally relevant (low nM) imager concentrations would be inefficient. Instead, we place one imager and one docking strand in a cubic (periodic) box of side length 42.5 nm, corresponding to an effective concentration of 21.6 μM. Time spent in hydrogen bonded states is not included in Tsampling.
Subsequently, we evaluate the crossing probabilities of the individual interfaces p(i|i−1). We start by randomly choosing saved trajectories at $${\uplambda}_{ - 1}^0$$ and simulating until we reach either $${\uplambda}_0^1$$ (success) or $${\uplambda}_{ - 2}^{ - 1}$$ (failure); we then record the probability of success, p(1|0), as well as the instantaneous configuration on passing through $${\uplambda}_0^1$$. Then, we randomly choose from those saved trajectories at $${\uplambda}_0^1$$ and simulate until we reach either $${\uplambda}_1^2$$ (success) or $${\uplambda}_{ - 2}^{ - 1}$$ (failure), saving trajectories at $${\uplambda}_1^2$$ as well as the success probability p(2|1). We continue this procedure for the subsequent interfaces $${\uplambda}_2^3$$ and $${\uplambda}_3^4$$, and finally obtain the imager-docking hybridization rate in Eq. 3.
Details for the number of trials and successful transitions across each interface are summarized in Supplementary Tables 3 and 4.
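For concreteness, the decomposition in Eq. (3) amounts to the short calculation sketched below; the per-interface counts used here are invented placeholders, not the values reported in Supplementary Tables 3 and 4.

```python
# Direct FFS estimate of the hybridization rate, r = flux * prod(p_i),
# from per-interface trial/success counts (placeholder numbers).
n_cross, t_sampling = 1200, 5.0e8          # Q=-2 -> Q=0 crossings, MC time units
flux = n_cross / t_sampling

trials    = [20000, 20000, 20000, 20000]   # attempts launched at each interface
successes = [  900,  4000,  9000, 15000]   # attempts that reached the next interface
p = [s / t for s, t in zip(successes, trials)]

rate = flux
for pi in p:
    rate *= pi
# 'rate' is the hybridization rate per unit simulation time at the simulated imager
# concentration; only ratios between docking designs (1x/3x/6x RD) are meaningful.
```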
The on-rates in Fig. 5a are averaged between two simulation repeats of approximately 20,000 transitions through each interface.
The relative hybridization rates of imager strands to each individual binding site on the multi-repeat docking motifs, shown in Supplementary Fig. 1, are extracted from the distribution of terminal states in FFS. Note that the terminal state Q = 4 in our reaction coordinate is defined as one in which 9 nucleotide bonds are formed between the imager and docking strand, regardless of which nucleotides are hybridized (in Supplementary Table 2). To determine which one of the binding sites is occupied in a given FFS terminal configuration we therefore analyzed the secondary structure of the terminal configurations. We defined the imager as being bound to a given domain if the majority of the docking nucleotides participating in bonding belonged to that domain. Approximately 20,000 terminal secondary structures were analyzed for the two separate simulation runs.
Concerning the precise parameters needed to replicate these simulations: MD timesteps were set to 0.003 oxDNA time units (9.1 femtoseconds) with an oxDNA diffusion coefficient set to 1.25 oxDNA units. Major-minor grooving was turned off. Temperature was set to 300 K, and the standard oxDNA thermostat was used, set to thermalize a fraction of velocities every 51 timesteps.
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article. | {} |
Physics in Motion
# Conservative Forces
The idea of energy permeates much of physics and allows us to understand, both qualitatively and quantitatively, a diverse range of phenomena. Through the consideration of mechanical systems, we will explore the notion of energy and what it is. The concept of energy shows up in many physical systems, and you will no doubt develop an understanding and appreciation of how useful it is.
The dynamics of many systems is governed by forces that result in a constant of motion known as the energy. Such forces are called conservative forces, and they are defined through a function referred to as the potential energy.
### Energy in One-Dimension
Consider a force acting on a single body, where the force takes the following form:
$$F = - \frac{d U(x)}{d x}$$
where $$U(x)$$ is the potential energy. From Newton's second law, we have:
$$F = m \frac{d v}{d t} = - \frac{d U}{d x}$$
By multiplying both sides by $$v = dx/dt$$ we obtain:
$$m v \frac{dv}{dt} = - \frac{dx}{dt} \frac{dU}{dx}$$
If $$U(x)$$ is not an explicit function of time, and only depends on time through the coordinate $$x(t)$$, then the chain rule of calculus allows us to simplify the right-hand side:
$$m v \frac{dv}{dt} = - \frac{dU}{dt}$$
Moving the potential energy to the left-hand side and noting that $$v \frac{dv}{dt} = \frac{1}{2} \frac{d(v^2)}{dt}$$, we find,
$$\frac{d}{dt} \left( \frac{1}{2} m v^2 + U(x) \right) = 0$$
Therefore, the energy,
$$E = \frac{1}{2} m v^2 + U(x),$$
is a constant of the motion. The part of the energy which is not he potential energy is known as the kinetic energy is typically labeled $$K$$,
$$K = \frac{1}{2} m v^2$$
Kinetic energy is the part of the energy that is associated with the motion of the object. Together, the kinetic and potential energies form the total energy. The total energy is the quantity that is conserved.
The standard unit of energy is known as the joule, and is defined as:
$$1 \ \text{joule} = 1 \ \frac{\text{kg} \cdot \text{m}^2}{\text{s}^2}$$
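As a quick numerical illustration of this conservation law (an addition for illustration, not part of the original lesson), the sketch below integrates one-dimensional motion in the potential $$U(x) = \frac{1}{2} k x^2$$ with a velocity Verlet scheme and checks that $$E = \frac{1}{2} m v^2 + U(x)$$ stays essentially constant; the mass, spring constant and step size are arbitrary choices.

```python
import numpy as np

m, k = 1.0, 4.0                      # mass and spring constant (arbitrary units)
U = lambda x: 0.5 * k * x**2         # potential energy
F = lambda x: -k * x                 # force = -dU/dx

dt, steps = 1e-3, 20000
x, v = 1.0, 0.0                      # initial position and velocity

energies = []
for _ in range(steps):
    # velocity Verlet update
    a = F(x) / m
    x += v * dt + 0.5 * a * dt**2
    a_new = F(x) / m
    v += 0.5 * (a + a_new) * dt
    energies.append(0.5 * m * v**2 + U(x))

drift = max(energies) - min(energies)   # should be tiny compared with E = 2.0
print(drift)
```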
### Energy in Three-Dimensions
In two or three dimensions, the potential energy function $$U(\vec{r})$$ defines the force:
$$\vec{F} = - \left( \frac{\partial U(\vec{r})}{\partial x} , \frac{\partial U(\vec{r})}{\partial y} , \frac{\partial U(\vec{r})}{\partial z} \right)$$
Following the same steps as we did above, we find that the energy is given by:
$$E = \frac{1}{2} m ( v^2_x + v^2_y + v^2_z ) + U( \vec{r} )$$
which is also conserved. The part that involves the velocity is again the kinetic energy of the system:
$$K = \frac{1}{2} m ( v^2_x + v^2_y + v^2_z )$$
### The Zero of the Potential Energy
The equations of motion, which determine the dynamics of the system, are insensitive to any constant shift of the potential energy, since the force depends only on the derivative of the potential. Therefore, the zero of the potential can be chosen at our convenience. However, once a choice is made, it must be respected throughout the analysis of the motion.
The number of integers that satisfy the equality $\left( x^{2} - 5x + 7 \right)^{x+1} = 1$ is
1. $2$
2. $3$
3. $5$
4. $4$
Given that, $(x^{2} - 5x + 7)^{x+1} = 1$
We know that, for $a^{b} = 1$
• If $b = 0 \Rightarrow a^{0} = 1$
• If $a = 1 \Rightarrow 1^{b} = 1$
• If $a = -1, b = \text{even} \Rightarrow (-1)^{\text{even}} = 1$
$\textsf{Case 1} : \; x+1 = 0$
$\Rightarrow \boxed{ x = -1 \;\textsf{(This case will be accepted)} }$

$\textsf{Case 2} : \; x^{2} - 5x + 7 = 1$

$\Rightarrow x^{2} - 5x + 6 = 0$

$\Rightarrow x^{2} - 3x - 2x + 6 = 0$

$\Rightarrow x (x - 3) - 2 (x-3) = 0$

$\Rightarrow (x-2)(x-3) = 0$

$\Rightarrow \boxed{x = 2,3 \;\textsf{(This case will be accepted)} }$

$\textsf{Case 3}: \; x^{2} - 5x + 7 = -1$

$\Rightarrow x^{2} - 5x + 8 = 0$
$\Rightarrow x = \dfrac{-(-5) \pm \sqrt{25-32}}{2}$
$\Rightarrow x = \dfrac{5 \pm \sqrt{-7}}{2}$
$\Rightarrow \boxed{x = \frac{5 \pm 7i}{2} \;\textsf{(This case will be rejected)}} \qquad [\because \sqrt{-1} = i ]$
This case is rejected because the values of $x$ are complex numbers, not integers.
$\therefore$ The number of integers satisfying the equation is $3.$
Correct Answer$: \text{B}$
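As a quick sanity check (not part of the original solution), a brute-force scan over a range of integers confirms that only the three values found above satisfy the equation; the scan range is an arbitrary choice.

```python
# Count integers x with (x^2 - 5x + 7)^(x + 1) == 1 over a small range.
solutions = [x for x in range(-100, 101) if (x**2 - 5*x + 7) ** (x + 1) == 1]
print(solutions)   # [-1, 2, 3]
```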
# Errors with moderncv
I'm using to make my CV with the moderncv package for the first time. I'm working with TeXnicCenter and MiKTeX 2.9 on Window 7. I've downloaded the new moderncv from CTAN ... but got into problems. I don't know how to fix them.
\documentclass[12pt,a4paper,naustrian]{moderncv}
\usepackage[T1]{fontenc}
\usepackage[latin9]{inputenc}
\usepackage{color}
\usepackage{babel}
\usepackage[scale=0.75]{geometry}
\moderncvtheme[grey]{classic}
\firstname{My} \familyname{Name}
\title{CV}
\address{Street}{somewhere\protect\[0.1em] country\protect\[0.2em]}
%\mobile{+1231231231323}
%\email{}
\begin{document}
\maketitle
%\section{Section}
%\cvitem{something}{something}
%\cvitem{something}{something}
Testtext.
\end{document}

Edit
Thanks for your responses. I have just compiled your example, but it is not working yet. Here are some lines from the .log file:

This is pdfTeX, Version 3.1415926-2.3-1.40.12 (MiKTeX 2.9) (preloaded format=pdflatex 2013.4.16) 16 APR 2013 14:34
entering extended mode
**LaTeX2.tex (C:\Users\iman\Desktop\moderncv\LaTeX2.tex
LaTeX2e <2011/06/27>
(C:\Users\iman\Desktop\moderncv\moderncv.cls
Document Class: moderncv 2013/02/09 v1.3.0 modern curriculum vitae and letter document class
("C:\Program Files (x86)\MiKTeX 2.9\tex\latex\base\size11.clo"
File: size11.clo 2007/10/19 v1.4h Standard LaTeX file (size option) )
("C:\Program Files (x86)\MiKTeX 2.9\tex\latex\etoolbox\etoolbox.sty"

! LaTeX Error: Missing \begin{document}.

See the LaTeX manual or LaTeX Companion for explanation.
Type H <return> for immediate help.
...
l.2 < !-- saved from url=(0078)http://mirrors.med.harvard.edu/ctan/macros/lat...

You're in trouble here. Try typing <return> to proceed.
If that doesn't work, type X <return> to quit.

Overfull \hbox (28.55835pt too wide) in paragraph at lines 2--4
[]\OT1/cmr/m/n/10.95 <!--
Overfull \hbox (25.91501pt too wide) in paragraph at lines 2--4
\OT1/cmr/m/n/10.95 saved []
Overfull \hbox (22.23462pt too wide) in paragraph at lines 2--4
\OT1/cmr/m/n/10.95 from []
Overfull \hbox (447.5515pt too wide) in paragraph at lines 2--4
\OT1/cmr/m/n/10.95 url=(0078)http://mirrors.med.harvard.edu/ctan/macros/latex/contrib/etoolbox/etoolbox.sty []

Welcome to TeX.sx! "I got into problems" is not informative at all. Please add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. – Jubobs Apr 16 '13 at 11:52

You can update your own answer instead of adding the code as a comment. – Svend Tveskæg Apr 16 '13 at 14:12

How exactly do you compile the example? If you're using pdflatex as the log in your answer seems to show, something is wrong with your (La)TeX installation. Try compiling this: \documentclass{article}\begin{document}Hello world!\end{document} – Xavier Apr 16 '13 at 17:04

Also, make sure you downloaded your TeX files correctly. The log line l.2 < !-- saved from url=(0078)... seems like you are feeding HTML to TeX (saved the code from a web page displaying it, maybe?). – Xavier Apr 16 '13 at 17:09

From the posted log file it looks like a problem with your copy of etoolbox.sty – Andrew Swann Apr 17 '13 at 10:07

## 2 Answers

You can put code into your question and use the {} button; it looks far more readable than in comments. You have

e\protect\[0.1em] country\protect\[0.2em]}

where \[ starts display math mode (which never finds a matching $). You intended
e\protect\\[0.1em] country\protect\\[0.2em]}
To force a line break
\documentclass[12pt,a4paper,naustrian]{moderncv}
\usepackage[T1]{fontenc}
\usepackage[latin9]{inputenc}
\usepackage{color}
\usepackage{babel}
\usepackage[scale=0.75]{geometry}
\moderncvtheme[grey]{classic}
\firstname{My} \familyname{Name}
\title{CV}
\begin{document}
\maketitle
%\section{Section}
%\cvitem{something}{something}
%\cvitem{something}{something}
Testtext.
\end{document}
You don't need to \protect line breaks in the address, but make sure you don't end the group early with }:
\documentclass[12pt,a4paper,naustrian]{moderncv}
\usepackage[T1]{fontenc}
\usepackage[latin9]{inputenc}
\usepackage{color}
\usepackage{babel}
\usepackage[scale=0.75]{geometry}
\moderncvtheme[grey]{classic}
\firstname{My} \familyname{Name}
\title{CV}
\mobile{+1231231231323}
\email{}
\begin{document}
\maketitle
\section{Section}
\cvitem{something}{something}
\cvitem{something}{something}
Test text.
\end{document}
# Drunken walk
1. Jan 27, 2005
### jessawells
i'm stuck trying to figure out this probabilities problem for my thermodynamics class. the question is:
consider an idealized drunk, restricted to walk in one dimension (eg. back and forward only). the drunk takes a step every second, and each pace is the same length. let us observe the drunk in discrete timesteps, as they walk randomly - with equal probability - back or forward.
a) suppose we have 2 non-interacting drunks who start out in the same location. What is the probability that the drunks are a distance A apart after M timesteps? (use stirling's approximation if you need to)
b)suppose instead that the 2 drunks started a distance L apart. Find the probability that the drunks meet at precisely the Mth timestep.
i know that the probability of a binary model system is given by:
P = multiplicity of system / total # of accessible states
= g (M, s) / 2^M
where g is the multiplicity and s is the spin excess (# of forward steps - # of backward steps")
= M! / [ (M/2 + s)! (M/2 - s)! 2^M ]
using stirling's approx. for large M, this becomes,
P(M,s) = sqrt[2/(pi M)] exp[-2s^2/M]
i'm not sure where to go from here and i'm really confused. the formula i wrote takes care of the M timesteps, but how do i factor in the distance A? how should i go about doing this question? I would appreciate any help. Thanks.
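One way to get intuition for part (a) is a direct simulation (an added illustration, not part of the thread): estimate the probability that the two walkers are a distance A apart after M steps and compare it with the Gaussian form above. The values of M, A and the number of trials below are arbitrary.

```python
import numpy as np

M, trials = 100, 200_000
rng = np.random.default_rng(0)

# Net displacement of an M-step +/-1 walk: 2*Binomial(M, 1/2) - M.
d1 = 2 * rng.binomial(M, 0.5, trials) - M
d2 = 2 * rng.binomial(M, 0.5, trials) - M
sep = np.abs(d1 - d2)

A = 10                                  # separation of interest (even, since d1 - d2 is even)
p_sim = np.mean(sep == A)

# Local-CLT approximation: d1 - d2 has variance 2M and lives on even integers
# (spacing 2), so P(|d1 - d2| = A) ~ 4/sqrt(4*pi*M) * exp(-A**2/(4*M)) for A > 0.
p_approx = (4 / np.sqrt(4 * np.pi * M)) * np.exp(-A**2 / (4 * M))
print(p_sim, p_approx)
```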
# Finding maximum likelihood estimator
1. Mar 22, 2014
### ptolema
1. The problem statement, all variables and given/known data
The independent random variables $X_1, ..., X_n$ have the common probability density function $f(x|\alpha, \beta)=\frac{\alpha}{\beta^{\alpha}}x^{\alpha-1}$ for $0\leq x\leq \beta$. Find the maximum likelihood estimators of $\alpha$ and $\beta$.
2. Relevant equations
log likelihood (LL) = n ln(α) - nα ln(β) + (α-1) ∑(ln xi)
3. The attempt at a solution
When I take the partial derivatives of log-likelihood (LL) with respect to α and β then set them equal to zero, I get:
(1) d(LL)/dα = n/α -n ln(β) + ∑(ln xi) = 0 and
(2) d(LL)/dβ = -nα/β = 0
I am unable to solve for α and β from this point, because I get α=0 from equation (2), but this clearly does not work when you substitute α=0 into equation (1). Can someone please help me figure out what I should be doing?
Last edited: Mar 22, 2014
2. Mar 22, 2014
### Mugged
So there might be some mistakes in the way you computed the log (LL) function. The term premultiplying log(β) should probably be reworked. Hint: log(β^y) = ylog(β). But what is y? It is not αn.
3. Mar 23, 2014
### Ray Vickson
Your expression for LL is correct, but condition (2) is wrong. Your problem is
$$\max_{a,b} LL = n \ln(a) - n a \ln(b) + (a-1) \sum \ln(x_i) \\ \text{subject to } b \geq m \equiv \max(x_1, x_2, \ldots ,x_n)$$
Here, I have written $a,b$ instead of $\alpha, \beta$. The constraint on $b$ comes from your requirement $0 \leq x_i \leq b \; \forall i$. When you have a bound constraint you cannot necesssarily set the derivative to zero; in fact, what replaces (2) is:
$$\partial LL/ \partial b \leq 0, \text{ and either } \partial LL/ \partial b = 0 \text{ or } b = m$$
For more on this type of condition, see, eg.,
http://en.wikipedia.org/wiki/Karush–Kuhn–Tucker_conditions
In the notation of the above link, you want to maximize a function $f = LL$, subject to no equalities, and an inequality of the form $g \equiv m - b \leq 0$. The conditions stated in the above link are that
$$\partial LL/ \partial a = \mu \, \partial g / \partial a \equiv 0 \\ \partial LL / \partial b = \mu \, \partial g / \partial b \equiv - \mu$$
Here, $\mu \geq 0$ is a Lagrange multiplier associated with the inequality constraint, and the b-condition above reads as $\partial LL / \partial b \leq 0$, as I already stated. Furthermore, the so-called "complementary slackness" condition is that either $\mu = 0$ or $g = 0$, as already stated.
Note that if $a/b \geq 0$ you have already satisfied the b-condition, and if $a/b > 0$ you cannot have $\partial LL / \partial b = 0$, so you must have $b = m$.
4. Mar 23, 2014
### Mugged
Ray, log((b^a)^N) = a^N*log(b) ≠ aNlog(b)?
5. Mar 23, 2014
### Ray Vickson
We have $(b^a)^2 = b^a \cdot b^a = b^{2a},$ etc.
6. Mar 23, 2014
### Mugged
Ah..ok, my bad. This problem is harder than I thought...KKT coming in a statistics problem. Thanks.
7. Mar 23, 2014
### Ray Vickson
It's not that complicated in this case. For $a,b > 0$ the function $LL(a,b)$ is strictly decreasing in $b$, so for any $a > 0$ its maximum over $b \geq m \,(m > 0)$ lies at $b = m$. You don't even need calculus to conclude this. | {} |
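To make the conclusion concrete (an added illustration, not part of the thread): with $\hat\beta = \max_i x_i$, setting condition (1) to zero gives $\hat\alpha = n/(n\ln\hat\beta - \sum_i \ln x_i)$. The sketch below checks this on simulated data; the true parameter values and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
alpha_true, beta_true, n = 2.5, 4.0, 100_000

# Sample from f(x) = (alpha / beta**alpha) * x**(alpha - 1) on [0, beta]
# via the inverse CDF: F(x) = (x / beta)**alpha, so x = beta * U**(1/alpha).
x = beta_true * rng.uniform(size=n) ** (1 / alpha_true)

beta_hat = x.max()                                          # MLE: boundary solution b = m
alpha_hat = n / (n * np.log(beta_hat) - np.log(x).sum())    # from d(LL)/d(alpha) = 0

print(alpha_hat, beta_hat)   # should be close to 2.5 and 4.0
```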
# Tutor profile: Joseph B.
Joseph B.
Math and Science tutor for 2 years
## Questions
### Subject:Linear Algebra
Question:
Find the solutions for the given system $$x+2y=1$$ $$3x+2y+4z=7$$ $$-2x+y-2z = -1$$
Joseph B.
We can solve this using a mixture of Gaussian elimination and substitution.

First we multiply row 1 by $-3$ and add it to row 2: $$-3[x+2y] +[3x +2y+4z] = -3(1) + 7$$ Then we multiply row 1 by $2$ and add it to row 3: $$2[x+2y] + [-2x +y - 2z] = 2(1) + (-1)$$ This gives us a new system: $$x+2y = 1$$ $$-4y +4z =4$$ $$5y -2z = 1$$

We can divide row 2 by $4$: $$x+2y = 1$$ $$-y +z =1$$ $$5y -2z = 1$$

We repeat the above process using row 2, multiplying it by $2$ and $5$ and adding it to row 1 and row 3 respectively. After doing so our new system looks like this: $$x + 2z = 3$$ $$-y +z =1$$ $$3z = 6$$

Solving row 3 gives us $$z = 2$$. Substituting $$z=2$$ into row 1 gives $$x=-1$$, and substituting $$z=2$$ into row 2 and solving tells us that $$y=1$$.
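A quick numerical cross-check of this elimination (added for illustration, not part of the original answer), using NumPy:

```python
import numpy as np

A = np.array([[ 1, 2,  0],
              [ 3, 2,  4],
              [-2, 1, -2]], dtype=float)
b = np.array([1, 7, -1], dtype=float)

print(np.linalg.solve(A, b))   # -> [-1.  1.  2.], i.e. x = -1, y = 1, z = 2
```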
### Subject:Trigonometry
Question:
Using double angle identities, prove that $$1-\sin^{2}{x} = \cos^{2}{x}$$
Joseph B.
We will start with the left side of the equation and work it through to equivalency.

First we use the double angle identity $$\cos(2x) = \cos^{2}(x)-\sin^{2}(x)$$, which gives $$\sin^{2}(x) = \cos^{2}(x) - \cos(2x)$$, and replace the $$\sin^{2}(x)$$: $$1-[\cos^{2}(x)-\cos(2x)]$$

Then, using another double angle identity $$\cos(2x) = 2\cos^{2}(x) - 1$$, we make another substitution: $$1 -[\cos^{2}(x) - (2\cos^{2}(x) - 1)]$$

Redistribute and combine like terms: $$1-[\cos^{2}(x) -2\cos^{2}(x) +1]$$ $$1-[-\cos^{2}(x) +1]$$ $$\cos^{2}(x)$$

Thus $$1 -\sin^{2}(x) = \cos^{2}(x)$$
### Subject:Calculus
Question:
Find the foci, vertices, center, eccentricity, and asymptotes of the conic section: $$9x^{2} - 16y^{2} -36x -32y - 92 = 0$$ (This is a calculus III problem)
Joseph B.
First we want to rearrange the equation into the standard form of a hyperbola: $$\frac{(x-h)^{2}}{a^{2}} - \frac{(y-k)^{2}}{b^{2}} = 1$$ To do so we reorganize the equation as: $$9x^{2} - 36x -16y^{2} - 32y = 92$$ From here we factor, then complete the square: $$9(x^{2} - 4x) -16 (y^{2} +2y) = 92$$ $$\frac{(x-2)^{2}}{9} - \frac{(y+1)^{2}}{16} = 1$$

Looking at this equation we can see that the center is at $$(h,k) = (2, -1)$$. The vertices are a distance $$a$$ from the center, symmetric about it: $$(-1,-1), (5, -1)$$. Using $$c^{2} = a^{2} + b^{2}$$, the foci are a distance $$c$$ from the center, symmetric about it: $$(-3,-1), (7,-1)$$. The eccentricity is found using the formula $$\frac{\sqrt{a^{2}+b^{2}} }{a} = \frac{5}{3}$$ Lastly, using the asymptote slopes $$\pm b/a$$ through the center, we find the equations of the asymptotes to be $$y = \frac{4x}{3}-\frac{11}{3}$$ and $$y = \frac{5}{3} - \frac{4x}{3}$$
## Integrals
An integral expression can be added using the
\int_{lower}^{upper}
command.
Note that an integral expression may look a little different in inline and display math mode: in inline mode the integral symbol and the limits are compressed.
LaTeX code:

Integral $\int_{a}^{b} x^2 dx$ inside text

$$\int_{a}^{b} x^2 dx$$
## Multiple integrals
To obtain double/triple/multiple integrals and cyclic integrals you must use the amsmath and esint (for cyclic integrals) packages.
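The table of multiple-integral examples is missing here; as a hedged illustration, the snippet below shows typical commands for double, triple and cyclic integrals (exact rendering depends on the packages loaded, and \oint itself is already available in base LaTeX):

```latex
\documentclass{article}
\usepackage{amsmath} % provides \iint, \iiint, \idotsint
\usepackage{esint}   % provides \oiint and other cyclic/surface integrals
\begin{document}
\[ \iint_V \mu(u,v) \,du\,dv \]
\[ \iiint_V \mu(u,v,w) \,du\,dv\,dw \]
\[ \oint_C f(z) \,dz \]
\end{document}
```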
# Wave Function
1. Mar 27, 2008
### eit32
For a particle in a 1-dimensional box confined by 0<x<a.
a)Construct a wave function phi(x)=psi(x,t=0) such that when an energy measurement is made on the particle in this state at t=0, the following energy values are obtained with the probabilities shown:
Energy E_n Obtained : E_1 E_3 E_5
Probability of Obtaining E_n : 0.5 0.4 0.1
b) Is this answer unique? Why or why not? Illustrate with an example.
2. Mar 27, 2008
### CompuChip
For a), calculate the eigenfunctions $\psi_n$. What does a general wave function look like? What is the probability of finding $E_n$ when measuring the energy (this will also answer b))
3. Mar 27, 2008
### eit32
I don't know how to go about finding the eigenfunctions, that's part of the problem
4. Mar 27, 2008
### CompuChip
OK, so that needs some explanation. But let's focus on the important part of the question first: suppose you have the eigenfunctions $\psi_n$, such that
$$\hat H \psi_n(x) = E_n \psi_n(x)$$
Then can you answer the rest of my questions? (just express the answers in terms of $\psi_n$)
5. Mar 27, 2008
### eit32
a general wave function is usually in the form of either an exponential or sines and cosines, and i'm not really sure about the probability
6. Mar 29, 2008
### merryjman
If I remember correctly, probability functions are simply the square of the wave functions, normalized so that the integral of the prob. func. over all space is 1.
As far as the wave function itself, maybe start with this: since you are aware that these are usually sines/cosines, what sort of sine/cosine will be zero at x=0 and x=a, since the function can't exist at the boundary?
7. Mar 29, 2008
### CompuChip
If you use Griffith's book, you'll find in equation [2.16] the general (time-independent) solution
$$\Psi(x) = \sum_n c_n \psi_n(x)$$.
Also he explains at the bottom of page 36 (in my 2nd edition) that
"As we'll see in chapter 3, what $|c_n|^2$ tells you is the probability that a measurement of the energy would yield the value $E_n$."
(If you don't have this book, go buy it; IMO it is by far the best introductory QM out there!)
Now can you see how to apply this to the problem at hand?
8. Mar 30, 2008
### droedujay
General 1-D Time independent Schrodingers equation
sqrt(2/a)*sin(n*pi*x/a) | {} |
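For part (a), the construction the thread is pointing toward is Phi(x) = sqrt(0.5) psi_1(x) + sqrt(0.4) psi_3(x) + sqrt(0.1) psi_5(x) with psi_n(x) = sqrt(2/a) sin(n pi x / a). The sketch below (an added illustration, not part of the thread) checks numerically that this state is normalized and that |c_n|^2 reproduces the stated probabilities; attaching arbitrary phases e^{i theta_n} to the coefficients gives other valid answers, which is why the answer is not unique.

```python
import numpy as np

a = 1.0
x = np.linspace(0, a, 20001)
dx = x[1] - x[0]
psi = lambda n: np.sqrt(2 / a) * np.sin(n * np.pi * x / a)   # box eigenfunctions

c = {1: np.sqrt(0.5), 3: np.sqrt(0.4), 5: np.sqrt(0.1)}      # chosen amplitudes
phi = sum(cn * psi(n) for n, cn in c.items())

norm = np.sum(phi**2) * dx                          # should be ~1
probs = {n: (np.sum(phi * psi(n)) * dx) ** 2 for n in (1, 3, 5)}
print(norm, probs)                                  # ~1.0 and {1: 0.5, 3: 0.4, 5: 0.1}
```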
# Polish language
Polish
polski
Pronunciation: [ˈpɔlskʲi]
Native to Poland
Ethnicity
Native speakers
45 million
L2 speakers: 5 million+
Early forms
Sign Language System
Official status
Official language in
Recognised minority language in
Regulated by Polish Language Council
(of the Polish Academy of Sciences)
Language codes
ISO 639-1pl
ISO 639-2pol
ISO 639-3pol – inclusive code
Individual code:
szl – Silesian
Glottologpoli1260
Linguasphere53-AAA-cc 53-AAA-b..-d(varieties: 53-AAA-cca to 53-AAA-ccu)
Map legend: majority of Polish speakers; Polish used alongside other languages; minority of Polish speakers.
Polish (Polish: język polski, [ˈjɛ̃zɨk ˈpɔlskʲi], polszczyzna [pɔlˈʂt͡ʂɨzna] or simply polski, [ˈpɔlskʲi]) is a West Slavic language of the Lechitic group written in the Latin script. It is spoken primarily in Poland and serves as the native language of the Poles. In addition to being the official language of Poland, it is also used by the Polish diaspora. There are over 50 million Polish speakers around the world. It ranks as the sixth most-spoken among languages of the European Union. Polish is subdivided into regional dialects and maintains strict T–V distinction pronouns, honorifics, and various forms of formalities when addressing individuals.
The traditional 32-letter Polish alphabet has nine additions (ą, ć, ę, ł, ń, ó, ś, ź, ż) to the letters of the basic 26-letter Latin alphabet, while removing three (x, q, v). Those three letters are at times included in an extended 35-letter alphabet, although they are not used in native words. The traditional set comprises 23 consonants and 9 written vowels, including two nasal vowels (ę, ą) defined by a reversed diacritic hook called an ogonek. Polish is a synthetic and fusional language which has seven grammatical cases. It is one of very few languages in the world possessing continuous penultimate stress (with only a few exceptions) and the only in its group having an abundance of palatal consonants. Contemporary Polish developed in the 1700s as the successor to the medieval Old Polish (10th–16th centuries) and Middle Polish (16th–18th centuries).
Among the major languages, it is most closely related to Slovak and Czech but differs in terms of pronunciation and general grammar. In addition, Polish was profoundly influenced by Latin and other Romance languages like Italian and French as well as Germanic languages (most notably German), which contributed to a large number of loanwords and similar grammatical structures. Extensive usage of nonstandard dialects has also shaped the standard language; considerable colloquialisms and expressions were directly borrowed from German or Yiddish and subsequently adopted into the vernacular of Polish which is in everyday use.
Historically, Polish was a lingua franca, important both diplomatically and academically in Central and part of Eastern Europe. Today, Polish is spoken by approximately 38 million people as their first language in Poland. It is also spoken as a second language in eastern Germany, northern Czech Republic and Slovakia, western parts of Belarus and Ukraine as well as in southeast Lithuania and Latvia. Because of the emigration from Poland during different time periods, most notably after World War II, millions of Polish speakers can also be found in countries such as Canada, Argentina, Brazil, Israel, Australia, the United Kingdom and the United States.
## History
Polish began to emerge as a distinct language around the 10th century, the process largely triggered by the establishment and development of the Polish state. Mieszko I, ruler of the Polans tribe from the Greater Poland region, united a few culturally and linguistically related tribes from the basins of the Vistula and Oder before eventually accepting baptism in 966. With Christianity, Poland also adopted the Latin alphabet, which made it possible to write down Polish, which until then had existed only as a spoken language.
The Book of Henryków is the earliest document to include a sentence written entirely in what can be interpreted as Old Polish: Day, ut ia pobrusa, a ty poziwai, meaning "let me grind, and you have a rest", highlighted in red.
The precursor to modern Polish is the Old Polish language. Ultimately, Polish descends from the unattested Proto-Slavic language. Polish was a lingua franca from 1500 to 1700 in Central and parts of Eastern Europe, because of the political, cultural, scientific and military influence of the former Polish–Lithuanian Commonwealth.
The Book of Henryków (Polish: Księga henrykowska, Latin: Liber fundationis claustri Sanctae Mariae Virginis in Heinrichau), contains the earliest known sentence written in the Polish language: Day, ut ia pobrusa, a ti poziwai (in modern orthography: Daj, uć ja pobrusza, a ti pocziwaj; the corresponding sentence in modern Polish: Daj, niech ja pomielę, a ty odpoczywaj or Pozwól, że ja będę mełł, a ty odpocznij; and in English: Come, let me grind, and you take a rest), written around 1270.
The medieval recorder of this phrase, the Cistercian monk Peter of the Henryków monastery, noted that "Hoc est in polonico" ("This is in Polish").
## Geographic distribution
Poland is one of the most linguistically homogeneous European countries; nearly 97% of Poland's citizens declare Polish as their first language. Elsewhere, Poles constitute large minorities in areas which were once administered or occupied by Poland, notably in neighboring Lithuania, Belarus, and Ukraine. Polish is the most widely-used minority language in Lithuania's Vilnius County, by 26% of the population, according to the 2001 census results, as Vilnius was part of Poland from 1922 until 1939. Polish is found elsewhere in southeastern Lithuania. In Ukraine, it is most common in the western parts of Lviv and Volyn Oblasts, while in West Belarus it is used by the significant Polish minority, especially in the Brest and Grodno regions and in areas along the Lithuanian border. There are significant numbers of Polish speakers among Polish emigrants and their descendants in many other countries.
In the United States, Polish Americans number more than 11 million but most of them cannot speak Polish fluently. According to the 2000 United States Census, 667,414 Americans of age five years and over reported Polish as the language spoken at home, which is about 1.4% of people who speak languages other than English, 0.25% of the US population, and 6% of the Polish-American population. The largest concentrations of Polish speakers reported in the census (over 50%) were found in three states: Illinois (185,749), New York (111,740), and New Jersey (74,663). Enough people in these areas speak Polish that PNC Financial Services (which has a large number of branches in all of these areas) offers services available in Polish at all of their cash machines in addition to English and Spanish.
According to the 2011 census there are now over 500,000 people in England and Wales who consider Polish to be their "main" language. In Canada, there is a significant Polish Canadian population: There are 242,885 speakers of Polish according to the 2006 census, with a particular concentration in Toronto (91,810 speakers) and Montreal.
The geographical distribution of the Polish language was greatly affected by the territorial changes of Poland immediately after World War II and Polish population transfers (1944–46). Poles settled in the "Recovered Territories" in the west and north, which had previously been mostly German-speaking. Some Poles remained in the previously Polish-ruled territories in the east that were annexed by the USSR, resulting in the present-day Polish-speaking minorities in Lithuania, Belarus, and Ukraine, although many Poles were expelled or emigrated from those areas to areas within Poland's new borders. To the east of Poland, the most significant Polish minority lives in a long, narrow strip along either side of the Lithuania-Belarus border. Meanwhile, the flight and expulsion of Germans (1944–50), as well as the expulsion of Ukrainians and Operation Vistula, the 1947 forced resettlement of Ukrainian minorities to the Recovered Territories in the west of the country, contributed to the country's linguistic homogeneity.
Geographic language distribution maps of Poland from pre-WWII to present
The "Recovered Territories" (in pink) were parts of Germany, including the Free City of Danzig (Gdańsk), that became part of Poland after World War II. The territory shown in grey was lost to the Soviet Union, which expelled many Poles from the area.
Geographical distribution of the Polish language (green) and other Central and Eastern European languages and dialects. A large Polish-speaking diaspora remains in the countries located east of Poland that were once the Eastern Borderlands of the Second Polish Republic (1918–1939).
Knowledge of the Polish language within parts of Europe. Polish is not a majority language anywhere outside of Poland, though Polish minority groups are present in some neighboring countries.
## Dialects
The oldest printed text in Polish – Statuta synodalia Episcoporum Wratislaviensis printed in 1475 in Wrocław by Kasper Elyan.
The Polish alphabet contains 32 letters. Q, V and X are not used in the Polish language.
The inhabitants of different regions of Poland still speak Polish somewhat differently, although the differences between modern-day vernacular varieties and standard Polish (język ogólnopolski) appear relatively slight. Most of the middle aged and young speak vernaculars close to standard Polish, while the traditional dialects are preserved among older people in rural areas. First-language speakers of Polish have no trouble understanding each other, and non-native speakers may have difficulty recognizing the regional and social differences. The modern standard dialect, often termed as "correct Polish", is spoken or at least understood throughout the entire country.
Polish has traditionally been described as consisting of four or five main regional dialects:
• Greater Polish, spoken in the west
• Lesser Polish, spoken in the south and southeast
• Masovian, spoken throughout the central and eastern parts of the country
• Silesian, spoken in the southwest (also considered a separate language, see comment below)
Kashubian, spoken in Pomerania west of Gdańsk on the Baltic Sea, is thought of either as a fifth Polish dialect or a distinct language, depending on the criteria used. It contains a number of features not found elsewhere in Poland, e.g. nine distinct oral vowels (vs. the six of standard Polish) and (in the northern dialects) phonemic word stress, an archaic feature preserved from Common Slavic times and not found anywhere else among the West Slavic languages. However, it "lacks most of the linguistic and social determinants of language-hood".
Many linguistic sources categorize Silesian as a dialect of Polish. However, many Silesians consider themselves a separate ethnicity and have been advocating for the recognition of a Silesian language. According to the last official census in Poland in 2011, over half a million people declared Silesian as their native language. Many sociolinguists (e.g. Tomasz Kamusella, Agnieszka Pianka, Alfred F. Majewicz, Tomasz Wicherkiewicz) assume that extralinguistic criteria decide whether a lect is an independent language or a dialect: the attitudes of speakers of the speech variety and/or political decisions, and this is dynamic (i.e. it changes over time). Silesian has also been recognized by research organizations such as SIL International, by linguistic resources such as Ethnologue and Linguist List, and by bodies such as the Ministry of Administration and Digitization. In July 2007, the Silesian language was recognized by ISO, and was attributed an ISO code of szl.
1. The distinctive dialect of the Gorals (Góralski) occurs in the mountainous area bordering the Czech Republic and Slovakia. The Gorals ("Highlanders") take great pride in their culture and the dialect. It exhibits some cultural influences from the Vlach shepherds who migrated from Wallachia (southern Romania) in the 14th–17th centuries.
2. The Poznanski dialect, spoken in Poznań and to some extent in the whole region of the former Prussian Partition (excluding Upper Silesia), with noticeable German influences.
3. In the northern and western (formerly German) regions where Poles from the territories annexed by the Soviet Union resettled after World War II, the older generation speaks a dialect of Polish characteristic of the Kresy that includes a longer pronunciation of vowels.
4. Poles living in Lithuania (particularly in the Vilnius region), in Belarus (particularly the northwest), and in the northeast of Poland continue to speak the Eastern Borderlands dialect, which sounds "slushed" (in Polish described as zaciąganie z ruska, "speaking with a Ruthenian drawl") and is easily distinguishable.
5. Some city dwellers, especially the less affluent population, had their own distinctive dialects – for example, the Warsaw dialect, still spoken by some of the population of Praga on the eastern bank of the Vistula. However, these city dialects are now mostly extinct due to assimilation with standard Polish.
6. Many Poles living in emigrant communities (for example, in the United States), whose families left Poland just after World War II, retain a number of minor features of Polish vocabulary as spoken in the first half of the 20th century that now sound archaic to contemporary visitors from Poland.
Polish linguistics has been characterized by a strong drive towards promoting prescriptive ideas of language intervention and usage uniformity, along with normatively-oriented notions of language "correctness" (unusual by Western standards).
## Phonology
Spoken Polish in a neutral informative tone
A Polish speaker, recorded in Poland
### Vowels
Polish has six oral vowels (seven oral vowels in written form), which are all monophthongs, and two nasal vowels. The oral vowels are /i/ (spelled i), /ɨ/ (spelled y and also transcribed as /ɘ/), /ɛ/ (spelled e), /a/ (spelled a), /ɔ/ (spelled o) and /u/ (spelled u and ó as separate letters). The nasal vowels are /ɛ̃/ (spelled ę) and /ɔ̃/ (spelled ą). Unlike Czech or Slovak, Polish does not retain phonemic vowel length — the letter ó, which formerly represented lengthened /ɔ/ in older forms of the language, is now vestigial and instead corresponds to /u/.
| | Front | Central | Back |
|---|---|---|---|
| Close | i | ɘ | u |
| Mid | ɛ | | ɔ |
| Open | | a | |
### Consonants
The Polish consonant system shows more complexity: its characteristic features include the series of affricate and palatal consonants that resulted from four Proto-Slavic palatalizations and two further palatalizations that took place in Polish. The full set of consonants, together with their most common spellings, can be presented as follows (although other phonological analyses exist):
| | Labial | Dental/alveolar | Retroflex | (Alveolo-)palatal | Velar (plain) | Velar (palatalized) |
|---|---|---|---|---|---|---|
| Nasal | m | n | | ɲ | | |
| Plosive, voiceless | p | t | | | k | |
| Plosive, voiced | b | d | | | ɡ | ɡʲ |
| Affricate, voiceless | | t͡s | t͡ʂ | t͡ɕ | | |
| Affricate, voiced | | d͡z | d͡ʐ | d͡ʑ | | |
| Fricative, voiceless | f | s | ʂ | ɕ | x | |
| Fricative, voiced | v | z | ʐ | ʑ | | |
| Tap/trill | | r | | | | |
| Approximant | | l | | j | w | |
Polish oral vowels depicted on a vowel chart. Main allophones (in black) are in broad transcription, whereas positional allophones (in red and green) are in narrow transcription. Allophones with red dots appear in palatal contexts. The central vowel [ɐ] is an unstressed allophone of /ɛ, ɔ, a/ in certain contexts
Neutralization occurs between voicedvoiceless consonant pairs in certain environments, at the end of words (where devoicing occurs) and in certain consonant clusters (where assimilation occurs). For details, see Voicing and devoicing in the article on Polish phonology.
Most Polish words are paroxytones (that is, the stress falls on the second-to-last syllable of a polysyllabic word), although there are exceptions.
### Consonant distribution
Polish permits complex consonant clusters, which historically often arose from the disappearance of yers. Polish can have word-initial and word-medial clusters of up to four consonants, whereas word-final clusters can have up to five consonants. Examples of such clusters can be found in words such as bezwzględny [bɛzˈvzɡlɛndnɨ] ('absolute' or 'heartless', 'ruthless'), źdźbło [ˈʑd͡ʑbwɔ] ('blade of grass'), wstrząs [ˈfstʂɔw̃s] ('shock'), and krnąbrność [ˈkrnɔmbrnɔɕt͡ɕ] ('disobedience'). A popular Polish tongue-twister (from a verse by Jan Brzechwa) is W Szczebrzeszynie chrząszcz brzmi w trzcinie [fʂt͡ʂɛbʐɛˈʂɨɲɛ ˈxʂɔw̃ʂt͡ʂ ˈbʐmi fˈtʂt͡ɕiɲɛ] ('In Szczebrzeszyn a beetle buzzes in the reed').
Unlike languages such as Czech, Polish does not have syllabic consonants – the nucleus of a syllable is always a vowel.
The consonant /j/ is restricted to positions adjacent to a vowel. It also cannot precede the letter y.
### Prosody
The predominant stress pattern in Polish is penultimate stress – in a word of more than one syllable, the next-to-last syllable is stressed. Alternating preceding syllables carry secondary stress, e.g. in a four-syllable word, where the primary stress is on the third syllable, there will be secondary stress on the first.
Each vowel represents one syllable, although the letter i normally does not represent a vowel when it precedes another vowel (it represents /j/, palatalization of the preceding consonant, or both depending on analysis). Also the letters u and i sometimes represent only semivowels when they follow another vowel, as in autor /ˈawtɔr/ ('author'), mostly in loanwords (so not in native nauka /naˈu.ka/ 'science, the act of learning', for example, nor in nativized Mateusz /maˈte.uʂ/ 'Matthew').
A formal-tone informative sign in Polish, with a composition of vowels and consonants and a mixture of long, medium and short syllables
Some loanwords, particularly from the classical languages, have the stress on the antepenultimate (third-from-last) syllable. For example, fizyka (/ˈfizɨka/) ('physics') is stressed on the first syllable. This may lead to a rare phenomenon of minimal pairs differing only in stress placement, for example muzyka /ˈmuzɨka/ 'music' vs. muzyka /muˈzɨka/ – genitive singular of muzyk 'musician'. When additional syllables are added to such words through inflection or suffixation, the stress normally becomes regular. For example, uniwersytet (/uɲiˈvɛrsɨtɛt/, 'university') has irregular stress on the third (or antepenultimate) syllable, but the genitive uniwersytetu (/uɲivɛrsɨˈtɛtu/) and derived adjective uniwersytecki (/uɲivɛrsɨˈtɛt͡skʲi/) have regular stress on the penultimate syllables. Loanwords generally become nativized to have penultimate stress. In psycholinguistic experiments, speakers of Polish have been demonstrated to be sensitive to the distinction between regular penultimate and exceptional antepenultimate stress.
Another class of exceptions is verbs with the conditional endings -by, -bym, -byśmy, etc. These endings are not counted in determining the position of the stress; for example, zrobiłbym ('I would do') is stressed on the first syllable, and zrobilibyśmy ('we would do') on the second. According to prescriptive authorities, the same applies to the first and second person plural past tense endings -śmy, -ście, although this rule is often ignored in colloquial speech (so zrobiliśmy 'we did' should be prescriptively stressed on the second syllable, although in practice it is commonly stressed on the third as zrobiliśmy). These irregular stress patterns are explained by the fact that these endings are detachable clitics rather than true verbal inflections: for example, instead of kogo zobaczyliście? ('whom did you see?') it is possible to say kogoście zobaczyli? – here kogo retains its usual stress (first syllable) in spite of the attachment of the clitic. Reanalysis of the endings as inflections when attached to verbs causes the different colloquial stress patterns. These stress patterns are however nowadays sanctioned as part of the colloquial norm of standard Polish.
Some common word combinations are stressed as if they were a single word. This applies in particular to many combinations of preposition plus a personal pronoun, such as do niej ('to her'), na nas ('on us'), przeze mnie ('because of me'), each stressed like a single word, on its penultimate syllable.
## Orthography
The Polish alphabet derives from the Latin script but includes certain additional letters formed using diacritics. The Polish alphabet was one of three major forms of Latin-based orthography developed for Western and some South Slavic languages, the others being Czech orthography and Croatian orthography, the last of these being a 19th-century invention trying to make a compromise between the first two. Kashubian uses a Polish-based system, Slovak uses a Czech-based system, and Slovene follows the Croatian one; the Sorbian languages blend the Polish and the Czech ones.
Historically, Poland's once diverse and multi-ethnic population utilized many forms of script to write Polish. For instance, Lipka Tatars and Muslims inhabiting the eastern parts of the former Polish–Lithuanian Commonwealth wrote Polish in the Arabic alphabet. The Cyrillic script is used to a certain extent by Polish speakers in Western Belarus, especially for religious texts.
The diacritics used in the Polish alphabet are the kreska (graphically similar to the acute accent) over the letters ć, ń, ó, ś, ź and the stroke through the letter ł; the kropka (superior dot) over the letter ż; and the ogonek ("little tail") under the letters ą, ę. The letters q, v, x are used only in foreign words and names.
Polish orthography is largely phonemic—there is a consistent correspondence between letters (or digraphs and trigraphs) and phonemes (for exceptions see below). The letters of the alphabet and their normal phonemic values are listed in the following table.
The Jakub Wujek Bible in Polish, 1599 print. The letters á and é were subsequently abolished, but survive in Czech.
| Upper case | Lower case | Phonemic value(s) | Upper case | Lower case | Phonemic value(s) |
|---|---|---|---|---|---|
| A | a | /a/ | Ń | ń | /ɲ/ |
| Ą | ą | /ɔ̃/, [ɔn], [ɔm] | O | o | /ɔ/ |
| B | b | /b/ (/p/) | Ó | ó | /u/ |
| C | c | /t͡s/ | P | p | /p/ |
| Ć | ć | /t͡ɕ/ | Q | q | Only loanwords |
| D | d | /d/ (/t/) | R | r | /r/ |
| E | e | /ɛ/ | S | s | /s/ |
| Ę | ę | /ɛ̃/, [ɛn], [ɛm], /ɛ/ | Ś | ś | /ɕ/ |
| F | f | /f/ | T | t | /t/ |
| G | g | /ɡ/ (/k/) | U | u | /u/ |
| H | h | /x/ (/ɣ/) | V | v | Only loanwords |
| I | i | /i/, /j/ | W | w | /v/ (/f/) |
| J | j | /j/ | X | x | Only loanwords |
| K | k | /k/ | Y | y | /ɨ/, /ɘ/ |
| L | l | /l/ | Z | z | /z/ (/s/) |
| Ł | ł | /w/, /ɫ/ | Ź | ź | /ʑ/ (/ɕ/) |
| M | m | /m/ | Ż | ż | /ʐ/ (/ʂ/) |
| N | n | /n/ |  |  |  |
The following digraphs and trigraphs are used:
| Digraph | Phonemic value(s) | Digraph/trigraph (before a vowel) | Phonemic value(s) |
|---|---|---|---|
| ch | /x/ | ci | /t͡ɕ/ |
| cz | /t͡ʂ/ | dzi | /d͡ʑ/ |
| dz | /d͡z/ (/t͡s/) | gi | /ɡʲ/ |
| dż | /d͡ʐ/ (/t͡ʂ/) | (c)hi | /xʲ/ |
| dź | /d͡ʑ/ (/t͡ɕ/) | ki | /kʲ/ |
| rz | /ʐ/ (/ʂ/) | ni | /ɲ/ |
| sz | /ʂ/ | si | /ɕ/ |
|  |  | zi | /ʑ/ |
Voiced consonant letters frequently come to represent voiceless sounds (as shown in the tables); this occurs at the end of words and in certain clusters, due to the neutralization mentioned in the Phonology section above. Occasionally also voiceless consonant letters can represent voiced sounds in clusters.
The spelling rule for the palatal sounds /ɕ/, /ʑ/, //, // and /ɲ/ is as follows: before the vowel i the plain letters s, z, c, dz, n are used; before other vowels the combinations si, zi, ci, dzi, ni are used; when not followed by a vowel the diacritic forms ś, ź, ć, dź, ń are used. For example, the s in siwy ("grey-haired"), the si in siarka ("sulfur") and the ś in święty ("holy") all represent the sound /ɕ/. The exceptions to the above rule are certain loanwords from Latin, Italian, French, Russian or English—where s before i is pronounced as s, e.g. sinus, sinologia, do re mi fa sol la si do, Saint-Simon i saint-simoniści, Sierioża, Siergiej, Singapur, singiel. In other loanwords the vowel i is changed to y, e.g. Syria, Sybir, synchronizacja, Syrakuzy.
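The three-way rule just stated is mechanical enough to express directly; the sketch below is our own illustration (loanword exceptions such as sinus are deliberately ignored, and the vowel inventory is hard-coded).

```python
# Sketch of the spelling rule for the Polish palatals /ɕ ʑ t͡ɕ d͡ʑ ɲ/:
# plain letter before i, digraph in -i before other vowels, diacritic form elsewhere.

PALATALS = {            # phoneme -> (diacritic form, before another vowel, before i)
    "ɕ":  ("ś",  "si",  "s"),
    "ʑ":  ("ź",  "zi",  "z"),
    "t͡ɕ": ("ć",  "ci",  "c"),
    "d͡ʑ": ("dź", "dzi", "dz"),
    "ɲ":  ("ń",  "ni",  "n"),
}

VOWELS = "aąeęoóuy"

def spell_palatal(phoneme, following=""):
    """Pick the spelling of a palatal consonant from what follows it."""
    diacritic, before_vowel, before_i = PALATALS[phoneme]
    if following == "i":
        return before_i          # as in siwy
    if following and following in VOWELS:
        return before_vowel      # as in siarka
    return diacritic             # as in święty, or word-finally

print(spell_palatal("ɕ", "i"))   # s
print(spell_palatal("ɕ", "a"))   # si
print(spell_palatal("ɕ", "w"))   # ś
```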
The following table shows the correspondence between the sounds and spelling:
| Phonemic value | Single letter/Digraph (in pausa or before a consonant) | Digraph/Trigraph (before a vowel) | Single letter/Digraph (before the vowel i) |
|---|---|---|---|
| /t͡ɕ/ | ć | ci | c |
| /d͡ʑ/ | dź | dzi | dz |
| /ɕ/ | ś | si | s |
| /ʑ/ | ź | zi | z |
| /ɲ/ | ń | ni | n |
Similar principles apply to /kʲ/, /ɡʲ/, /xʲ/ and /lʲ/, except that these can only occur before vowels, so the spellings are k, g, (c)h, l before i, and ki, gi, (c)hi, li otherwise. Most Polish speakers, however, do not consider palatalization of k, g, (c)h or l as creating new sounds.
Except in the cases mentioned above, the letter i if followed by another vowel in the same word usually represents /j/, yet a palatalization of the previous consonant is always assumed.
The reverse case, where the consonant remains unpalatalized but is followed by a palatalized consonant, is written by using j instead of i: for example, zjeść, "to eat up".
The letters ą and ę, when followed by plosives and affricates, represent an oral vowel followed by a nasal consonant, rather than a nasal vowel. For example, ą in dąb ("oak") is pronounced [ɔm], and ę in tęcza ("rainbow") is pronounced [ɛn] (the nasal assimilates to the following consonant). When followed by l or ł (for example przyjęli, przyjęły), ę is pronounced as just e. When ę is at the end of the word it is often pronounced as just [ɛ].
Note that, depending on the word, the phoneme /x/ can be spelt h or ch, the phoneme /ʐ/ can be spelt ż or rz, and /u/ can be spelt u or ó. In several cases it determines the meaning, for example: może ("maybe") and morze ("sea").
In occasional words, letters that normally form a digraph are pronounced separately. For example, rz represents /rz/, not /ʐ/, in words like zamarzać ("freeze") and in the name Tarzan.
Doubled letters are usually pronounced as a single, lengthened consonant; however, some speakers might pronounce the combination as two separate sounds.
There are certain clusters where a written consonant would not be pronounced. For example, the ł in the word jabłko ("apple") might be omitted in ordinary speech, leading to the pronunciation japko.
## Grammar
Polish is a highly fusional language with relatively free word order, although the dominant arrangement is subject–verb–object (SVO). There are no articles, and subject pronouns are often dropped.
Nouns belong to one of three genders: masculine, feminine and neuter. The masculine gender is also divided into subgenders: animate vs. inanimate in the singular, and human vs. nonhuman in the plural. There are seven cases: nominative, genitive, dative, accusative, instrumental, locative and vocative.
Adjectives agree with nouns in terms of gender, case, and number. Attributive adjectives most commonly precede the noun, although in certain cases, especially in fixed phrases (like język polski, "Polish (language)"), the noun may come first; the rule of thumb is that a generic descriptive adjective normally precedes the noun (e.g. piękny kwiat, "beautiful flower") while a categorizing adjective often follows it (e.g. węgiel kamienny, "black coal"). Most short adjectives and their derived adverbs form comparatives and superlatives by inflection (the superlative is formed by prefixing naj- to the comparative).
Verbs are of imperfective or perfective aspect, often occurring in pairs. Imperfective verbs have a present tense, past tense, compound future tense (except for być "to be", which has a simple future będę etc., this in turn being used to form the compound future of other verbs), subjunctive/conditional (formed with the detachable particle by), imperatives, an infinitive, present participle, present gerund and past participle. Perfective verbs have a simple future tense (formed like the present tense of imperfective verbs), past tense, subjunctive/conditional, imperatives, infinitive, present gerund and past participle. Conjugated verb forms agree with their subject in terms of person, number, and (in the case of past tense and subjunctive/conditional forms) gender.
Passive-type constructions can be made using the auxiliary być or zostać ("become") with the passive participle. There is also an impersonal construction where the active verb is used (in third person singular) with no subject, but with the reflexive pronoun się present to indicate a general, unspecified subject (as in pije się wódkę "vodka is being drunk"—note that wódka appears in the accusative). A similar sentence type in the past tense uses the passive participle with the ending -o, as in widziano ludzi ("people were seen"). As in other Slavic languages, there are also subjectless sentences formed using such words as można ("it is possible") together with an infinitive.
Yes-no questions (both direct and indirect) are formed by placing the word czy at the start. Negation uses the word nie, before the verb or other item being negated; nie is still added before the verb even if the sentence also contains other negatives such as nigdy ("never") or nic ("nothing"), effectively creating a double negative.
Cardinal numbers have a complex system of inflection and agreement. Zero and cardinal numbers higher than five (except for those ending with the digit 2, 3 or 4 but not ending with 12, 13 or 14) govern the genitive case rather than the nominative or accusative. Special forms of numbers (collective numerals) are used with certain classes of noun, which include dziecko ("child") and exclusively plural nouns such as drzwi ("door").
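As a rough illustration of the counting rule in the last paragraph (our simplification; masculine-personal forms and the collective numerals mentioned above are ignored), the sketch below decides which case a cardinal number imposes on the counted noun.

```python
# Simplified sketch of Polish numeral government: 1 takes the singular,
# numbers ending in 2-4 (but not 12-14) take the nominative plural,
# and zero plus the remaining numbers govern the genitive plural.

def case_after_numeral(n):
    n = abs(n)
    if n == 1:
        return "nominative singular"
    if n % 10 in (2, 3, 4) and n % 100 not in (12, 13, 14):
        return "nominative plural"
    return "genitive plural"

for n in (1, 3, 5, 13, 22, 112):
    print(n, case_after_numeral(n))
# 1 nominative singular, 3 nominative plural, 5 genitive plural,
# 13 genitive plural, 22 nominative plural, 112 genitive plural
```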
## Borrowed words
Poland was once a multi-ethnic nation with many minorities that contributed to the Polish language.
1. Top left: cauliflower (Polish kalafior from Italian cavolfiore).
2. Top right: rope (sznur from German Schnur).
3. Bottom left: shark (rekin from French requin).
4. Bottom right: teacher (belfer (colloquial) from Yiddish בעלפֿער belfer)
Polish has, over the centuries, borrowed a number of words from other languages. When borrowing, pronunciation was adapted to Polish phonemes and spelling was altered to match Polish orthography. In addition, word endings are liberally applied to almost any word to produce verbs, nouns, adjectives, as well as adding the appropriate endings for cases of nouns, adjectives, diminutives, double-diminutives, augmentatives, etc.
Depending on the historical period, borrowing has proceeded from various languages. Notable influences have been Latin (10th–18th centuries), Czech (10th and 14th–15th centuries), Italian (16th–17th centuries), French (17th–19th centuries), German (13–15th and 18th–20th centuries), Hungarian (15th–16th centuries) and Turkish (17th century). Currently, English words are the most common imports to Polish.
The Latin language, for a very long time the only official language of the Polish state, has had a great influence on Polish. Many Polish words were direct borrowings or calques (e.g. rzeczpospolita from res publica) from Latin. Latin was known to a larger or smaller degree by most of the numerous szlachta in the 16th to 18th centuries (and it continued to be extensively taught at secondary schools until World War II). Apart from dozens of loanwords, its influence can also be seen in a number of verbatim Latin phrases in Polish literature (especially from the 19th century and earlier). During the 12th and 13th centuries, Mongolian words were brought to the Polish language during wars with the armies of Genghis Khan and his descendants, e.g. dzida (spear) and szereg (a line or row).
Words from Czech, an important influence during the 10th and 14th–15th centuries include sejm, hańba and brama.
In 1518, the Polish king Sigismund I the Old married Bona Sforza, the niece of the Holy Roman emperor Maximilian, who introduced Italian cuisine to Poland, especially vegetables. Hence, words from Italian include pomidor from "pomodoro" (tomato), kalafior from "cavolfiore" (cauliflower), and pomarańcza, a portmanteau from Italian "pomo" (pome) plus "arancio" (orange). A later word of Italian origin is autostrada (from Italian "autostrada", highway).
In the 18th century, with the rising prominence of France in Europe, French supplanted Latin as an important source of words. Some French borrowings also date from the Napoleonic era, when the Poles were enthusiastic supporters of Napoleon. Examples include ekran (from French "écran", screen), abażur ("abat-jour", lamp shade), rekin ("requin", shark), meble ("meuble", furniture), bagaż ("bagage", luggage), walizka ("valise", suitcase), fotel ("fauteuil", armchair), plaża ("plage", beach) and koszmar ("cauchemar", nightmare). Some place names have also been adapted from French, such as the Warsaw borough of Żoliborz ("joli bord" = beautiful riverside), as well as the town of Żyrardów (from the name Girard, with the Polish suffix -ów attached to refer to the founder of the town).
A common handbag in Polish is called a torba, a word directly derived from the Turkish language. Turkish loanwords are common, as Poland bordered the Ottoman Empire for centuries.
Many words were borrowed from the German language from the sizable German population in Polish cities during medieval times. German words found in the Polish language are often connected with trade, the building industry, civic rights and city life. Some words were assimilated verbatim, for example handel (trade) and dach (roof); others are pronounced similarly but differ in writing, such as Schnur / sznur (cord). As a result of being neighbors with Germany, Polish has many German expressions which have become literally translated (calques). The regional dialects of Upper Silesia and Masuria (Modern Polish East Prussia) have noticeably more German loanwords than other varieties.
The contacts with Ottoman Turkey in the 17th century brought many new words, some of them still in use, such as: jar ("yar" deep valley), szaszłyk ("şişlik" shish kebab), filiżanka ("fincan" cup), arbuz ("karpuz" watermelon), dywan ("divan" carpet), etc.
From the founding of the Kingdom of Poland in 1025 through the early years of the Polish–Lithuanian Commonwealth created in 1569, Poland was the most tolerant country of Jews in Europe. Known as the "paradise for the Jews", it became a shelter for persecuted and expelled European Jewish communities and the home to the world's largest Jewish community of the time. As a result, many Polish words come from Yiddish, spoken by the large Polish Jewish population that existed until the Holocaust. Borrowed Yiddish words include bachor (an unruly boy or child), bajzel (slang for mess), belfer (slang for teacher), ciuchy (slang for clothing), cymes (slang for very tasty food), geszeft (slang for business), kitel (slang for apron), machlojka (slang for scam), mamona (money), manele (slang for oddments), myszygene (slang for lunatic), pinda (slang for girl, pejoratively), plajta (slang for bankruptcy), rejwach (noise), szmal (slang for money), and trefny (dodgy).
The mountain dialects of the Górale in southern Poland have quite a number of words borrowed from Hungarian (e.g. baca, gazda, juhas, hejnał) and Romanian as a result of historical contacts with Hungarian-dominated Slovakia and Wallachian herders who travelled north along the Carpathians.
Thieves' slang includes such words as kimać (to sleep) or majcher (knife) of Greek origin, considered then unknown to the outside world.
In addition, Turkish and Tatar have exerted influence upon the vocabulary of war, names of oriental costumes etc. Russian borrowings began to make their way into Polish from the second half of the 19th century on.
Polish has also received a substantial number of English loanwords, particularly after World War II. Recent loanwords come primarily from the English language, mainly those that have Latin or Greek roots, for example komputer (computer), korupcja (from 'corruption', but sense restricted to 'bribery') etc. Concatenation of parts of words (e.g. auto-moto), which is not native to Polish but common in English, is also sometimes used. When borrowing English words, Polish often changes their spelling. For example, the Latin suffix '-tio' corresponds to -cja. To make the word plural, -cja becomes -cje. Examples of this include inauguracja (inauguration), dewastacja (devastation), recepcja (reception), konurbacja (conurbation) and konotacje (connotations). Also, the digraph qu becomes kw (kwadrant = quadrant; kworum = quorum).
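The two regular adaptations named here (-tio(n) → -cja with plural -cje, and qu → kw) can be shown as a toy transformation; the function below is only a sketch of these two patterns and is not a general nativization algorithm.

```python
# Toy sketch of two regular loanword adaptations described above:
# the suffix -tio(n) -> -cja (plural -cje) and the digraph qu -> kw.
import re

def polonize(word, plural=False):
    word = word.lower()
    word = re.sub(r"tion$|tio$", "cje" if plural else "cja", word)
    return word.replace("qu", "kw")

print(polonize("reception"))               # recepcja
print(polonize("reception", plural=True))  # recepcje
print(polonize("quadrant"))                # kwadrant
print(polonize("quorum"))                  # kworum
```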
## Loanwords from Polish
There are numerous words in both Polish and Yiddish (Jewish) languages which are near-identical due to the large Jewish minority that once inhabited Poland. One example is the fishing rod, ווענטקע (ventke), borrowed directly from Polish wędka.
The Polish language has influenced others. Particular influences appear in other Slavic languages and in German — due to their proximity and shared borders. Examples of loanwords include German Grenze (border), Dutch and Afrikaans grens from Polish granica; German Peitzker from Polish piskorz (weatherfish); German Zobel, French zibeline, Swedish sobel, and English sable from Polish soból; and ogonek ("little tail") — the word describing a diacritic hook-sign added below some letters in various alphabets. The common Germanic word quartz comes from the dialectical Old Polish kwardy. "Szmata," a Polish, Slovak and Ruthenian word for "mop" or "rag", became part of Yiddish. The Polish language exerted significant lexical influence upon Ukrainian, particularly in the fields of abstract and technical terminology; for example, the Ukrainian word панство panstvo (country) is derived from Polish państwo. The Polish influence on Ukrainian is particularly marked on western Ukrainian dialects in western Ukraine, which for centuries was under Polish cultural domination.
There is a substantial number of Polish words which officially became part of Yiddish, once the main language of European Jews. These include basic items, objects or terms such as a bread bun (Polish bułka, Yiddish בולקע bulke), a fishing rod (wędka, ווענטקע ventke), an oak (dąb, דעמב demb), a meadow (łąka, לאָנקע lonke), a moustache (wąsy, וואָנצעס vontses) and a bladder (pęcherz, פּענכער penkher).
Quite a few culinary loanwords exist in German and in other languages, some of which describe distinctive features of Polish cuisine. These include German and English Quark from twaróg (a kind of fresh cheese) and German Gurke, English gherkin from ogórek (cucumber). The word pierogi (Polish dumplings) has spread internationally, as well as pączki (Polish donuts) and kiełbasa (sausage, e.g. kolbaso in Esperanto). As far as pierogi is concerned, the original Polish word is already plural (sing. pieróg, plural pierogi; stem pierog-, plural ending -i; NB o becomes ó in a closed syllable, as here in the singular), yet it is commonly used with the English plural ending -s in Canada and the United States, pierogis, thus making it a "double plural". A similar situation happened with the Polish loanword from English czipsy ("potato chips"): English chips is already plural in the original (chip + -s), yet it has obtained the Polish plural ending -y.
It is believed that the English word spruce was derived from Prusy, the Polish name for the region of Prussia. It became spruce because in Polish, z Prus, sounded like "spruce" in English (transl. "from Prussia") and was a generic term for commodities brought to England by Hanseatic merchants and because the tree was believed to have come from Polish Ducal Prussia. However, it can be argued that the word is actually derived from the Old French term Pruce, meaning literally Prussia.
## Literature
The manuscript of Pan Tadeusz held at Ossolineum in Wrocław. Adam Mickiewicz's signature is visible.
The Polish language started to be used in literature in the Late Middle Ages. Notable works include the Holy Cross Sermons (13th/14th century), Bogurodzica (15th century) and Master Polikarp's Dialog with Death (15th century). The most influential Renaissance-era literary figures in Poland were poet Jan Kochanowski (Laments), Mikołaj Rej and Piotr Skarga (The Lives of the Saints), who established poetic patterns that would become integral to the Polish literary language and laid foundations for the modern Polish grammar. During the Age of Enlightenment in Poland, Ignacy Krasicki, known as "the Prince of Poets", wrote the first Polish novel called The Adventures of Mr. Nicholas Wisdom as well as Fables and Parables. Another significant work from this period is The Manuscript Found in Saragossa written by Jan Potocki, a Polish nobleman, Egyptologist, linguist, and adventurer.
In the Romantic Era, the most celebrated national poets, referred to as the Three Bards, were Adam Mickiewicz (Pan Tadeusz and Dziady), Juliusz Słowacki (Balladyna) and Zygmunt Krasiński (The Undivine Comedy). Poet and dramatist Cyprian Norwid is regarded by some scholars as the "Fourth Bard". Important positivist writers include Bolesław Prus (The Doll, Pharaoh), Henryk Sienkiewicz (author of numerous historical novels the most internationally acclaimed of which is Quo Vadis), Maria Konopnicka (Rota), Eliza Orzeszkowa (Nad Niemnem), Adam Asnyk and Gabriela Zapolska (The Morality of Mrs. Dulska). The period known as Young Poland produced such renowned literary figures as Stanisław Wyspiański (The Wedding), Stefan Żeromski (Homeless People, The Spring to Come), Władysław Reymont (The Peasants) and Leopold Staff. The prominent interbellum period authors include Maria Dąbrowska (Nights and Days), Stanisław Ignacy Witkiewicz (Insatiability), Julian Tuwim, Bruno Schulz, Bolesław Leśmian, Witold Gombrowicz and Zuzanna Ginczanka.
Other notable writers and poets from Poland active during World War II and after are Zbigniew Herbert, Stanisław Lem, Zofia Nałkowska, Tadeusz Borowski, Sławomir Mrożek, Krzysztof Kamil Baczyński, Julia Hartwig, Marek Krajewski, Joanna Bator, Andrzej Sapkowski, Adam Zagajewski, Dorota Masłowska, Jerzy Pilch, Ryszard Kapuściński and Andrzej Stasiuk.
Five people writing in the Polish language have been awarded the Nobel Prize in Literature: Henryk Sienkiewicz (1905), Władysław Reymont (1924), Czesław Miłosz (1980), Wisława Szymborska (1996) and Olga Tokarczuk (2018). | {} |
On Singular Control Games - With Applications to Capital Accumulation

Inaugural dissertation for obtaining the degree of Doctor of Economics (Dr. rer. pol.) at the Faculty of Economics, Universität Bielefeld

Submitted by Diplom-Wirtschaftsingenieur Jan-Henrik Steg

Bielefeld, April 2010

First examiner: Professor Dr. Frank Riedel, Institut für Mathematische Wirtschaftsforschung (IMW), Universität Bielefeld
Second examiner: Professor Dr. Herbert Dawid, Institut für Mathematische Wirtschaftsforschung (IMW), Universität Bielefeld

Printed on ageing-resistant paper in accordance with DIN-ISO 9706

Contents
1 Introduction
  1.1 Capital accumulation
  1.2 Irreversible investment and singular control
  1.3 Strategic option exercise
  1.4 Grenadier's model
2 Open loop strategies
  2.1 Perfect competition
    2.1.1 Characterization of equilibrium
    2.1.2 Construction of investment
    2.1.3 Myopic optimal stopping
  2.2 The game
  2.3 Symmetric equilibrium
  2.4 Monotone follower problems
    2.4.1 First order condition
    2.4.2 Base capacity
  2.5 Asymmetric equilibria
  2.6 Explicit solutions
3 Closed loop strategies
  3.1 The game
  3.2 Open loop equilibrium
  3.3 Markov perfect equilibrium
  3.4 A verification theorem
    3.4.1 Reflection strategies
    3.4.2 Verification theorem
  3.5 Bertrand equilibrium
  3.6 Myopic investment
    3.6.1 The myopic investor
    3.6.2 Playing against a myopic investor
    3.6.3 Equilibrium failure
  3.7 Collusive equilibria
  3.8 Conclusion
Appendix
  Lemma 3.11
  Proof of Lemma 3.5
  Proof of Theorem 2.15
Bibliography
Chapter 1
Introduction
The aim of this work is to establish a mathematically precise framework for studying games of capital accumulation under uncertainty. Such games arise as a natural extension from different perspectives that all lead to singular control exercised by the agents, which induces some essential formalization problems.
Capital accumulation as a game in continuous time originates from the work of Spence [33], where firms make dynamic investment decisions to expand their production capacities irreversibly. Spence analyses the strategic effect of capital commitment, but in a deterministic world. We add uncertainty to the model, as he suggests, to account for an important further aspect of investment. Uncertain returns induce a reluctance to invest and thus allow us to abolish the artificial bound on investment rates, resulting in singular control.
In a rather general formulation, this intention has only been achieved before for the limiting case of perfect competition, where an individual firm's action does not influence other players' payoffs and decisions, see [6]. The perfectly competitive equilibrium is linked via a social planner to the other extreme, monopoly, which benefits similarly from the lack of interaction. There is considerable work on the single agent's problem of sequential irreversible investment, see e.g. [12, 30, 31], and all instances involve singular control. In our game, the number of players is finite and actions have a strategic effect, so this is the second line of research we extend.
With irreversible investment, the firm's opportunity to freely choose the time of investment is a perpetual real option. It is intuitive that the value of the option is strongly affected when competitors can influence the value of the underlying by their actions. The classical option value of waiting [15, 29] is threatened under competition and the need arises to model option exercise games.
While typical formulations [23, 28] assume fixed investment sizes and pose only the question how to schedule a single action, we determine investment sizes endogenously. Our framework is also the limiting case for repeated investment opportunities of arbitrarily small size. Since investment is allowed to take the form of singular control, its rate need not be defined even where it occurs continuously.
An early instance of such a game is the model by Grenadier [22]. It received much attention because it connects the mentioned different lines of research, but it also became clear that one has to be very careful with the formulation of strategies. As Back and Paulsen [4] show, it is exactly the singular nature of investment which poses the difficulties. They explain that Grenadier's results hold only for open loop strategies, which are investment plans merely contingent on exogenous shocks. Even to specify sensible feedback strategies poses severe conceptual problems.
We also begin with open loop strategies, which condition investment only on the information concerning exogenous uncertainty. Technically, this is the multi-agent version of the sequential irreversible investment problem, since determining a best reply to open loop strategies in a rather general formulation is a monotone follower problem. The main new mathematical problem is then consistency in equilibrium. We show that it suffices to focus on the instantaneous strategic properties of capital to obtain quite concise statements about equilibrium existence and characteristics, without a need to specify the model or the underlying uncertainty in detail. Nevertheless, the scope for strategic interaction is rather limited when modelling open loop strategies.
With our subsequent account of closed loop strategies, we enter completely new terrain. While formulating the game with open loop strategies is a quite clear extension of monopoly, we now have to propose classes of strategies that can be handled, and conceive of an appropriate (subgame perfect) equilibrium definition. To achieve this, we can borrow only very little from the differential games literature.
After establishing the formal framework in a first effort, we encounter new control problems in equilibrium determination. Since the methods used for open loop strategies are not applicable, we take a dynamic programming approach and develop a suitable verification theorem. It is applied to construct different classes of Markov perfect equilibria for the Grenadier model [22] to study the effect of preemption on the value of the option to delay investment. In fact, there are Markov perfect equilibria with positive option values despite perfect circumstances for preemption.
1.1 Capital accumulation
Capital accumulation games have become classical instances of differential games since the work by Spence [33]. In these games, firms typically compete on some output good market in continuous time and obtain instantaneous equilibrium profits depending on the firms' current capital stocks, which act as strategic substitutes. The firms can control their investment rates at any time to adjust their capital stocks.
By irreversibility, undertaken investment has commitment power and we can observe the effect of preemption. However, as Spence elaborated, this depends on the type of strategies that firms are presumed to use. The issue is discussed in the now common terminology by Fudenberg and Tirole [21], who take up his model.
If firms commit themselves at the beginning of the game to investment paths such that the rates are functions of time only, one speaks of open loop strategies. In this case, the originally dynamic game becomes in fact static in the sense that there is a single instance of decision making and there are no reactions during the implementation of the chosen investment plans. In equilibrium, the firms build up capital levels that are, as a steady state, mutual best replies.
However, if one firm can reach its open loop equilibrium capital level earlier than the opponent, it may be advantageous to keep investing further ahead. Then, the lagging firm has to adapt to the larger firm's capital stock and its best reply may be to stop before reaching the open loop equilibrium target, resulting in an improvement for the quicker firm. The laggard cannot credibly threaten to expand more than the best reply to the larger opponent's capital level in order to induce the latter to invest less in the first place. So, we observe preemption with asymmetric payoffs.
Commitments such as the one to an open loop investment profile should only be allowed if they are a clear choice in the model setup. Whenever a revision of the investment policy is deemed possible, an optimal continuation of the game from that point on should be required in equilibrium. Strategies involving commitment in general do not form such subgame perfect equilibria. To model dynamic decision making, at least state-dependent strategies have to be considered, termed closed loop or feedback strategies (the terminology is adapted from control theory).
In capital accumulation games, the natural (minimal) state to condition instantaneous investment decisions on are the current capital levels. They comprise all influence of past actions on current and future payoffs. Closed loop strategies of this type are called Markovian strategies, and with a properly defined state, subgame perfect equilibria in these strategies persist also with richer strategy spaces.
In order to observe any dynamic interaction and preemption in the deterministic model, one has to impose an upper bound on the investment rates. Since the optimal Markovian strategies are typically "bang-bang" (i.e., whenever there is an incentive to invest, it should occur at the maximally feasible rate), an unlimited rate would result in immediate jumps, terminating all dynamics in the model. The ability to expand faster is a strategic advantage by the commitment effect and no new investment incentives arise in the game.
Introducing uncertainty adds a fundamental aspect to investment, fostering endogenous reluctance and more dynamic decisions. With stochastically evolving returns, it is generally not optimal to invest up to capital levels that imply a mutual lock-in for the rest of time. Although investment may occur infinitely fast, the firms prefer a stepwise expansion under uncertainty, because the option to wait is valuable with irreversible investment.
1.2 Irreversible investment and singular control
The value of the option to wait is an important factor in the problem of sequential irreversible investment under uncertainty (e.g. [1, 30]). When the firm can arbitrarily divide investments, it owns de facto a family of real options on installing marginal capital units. The exercise of these options depends on the gradual revelation of information regarding the uncertain returns, analogously to single real options. It is valuable to reduce the probability of low returns by investing only when the net present value is sufficiently positive.
The relation between implementing a monotone capital process with unrestricted investment rate, but conditional on dynamic information about exogenous uncertainty, and timing the exercise of growth options based on the same information is, in mathematical terms, the relation between singular control and optimal stopping.
For all degrees of competition discussed in the literature (monopoly, perfect competition [27], and oligopoly [5, 22]), optimal investment takes the form of singular control. This means that investment occurs only at singular events, though usually not in lumps but nevertheless at undefined rates.
Typically only the initial investment is a lump. In most models, subsequent investment is triggered by the output good price reaching a critical threshold, and the additional output dynamically prevents the price from exceeding this boundary. This happens in a minimal way so that the control paths needed for the "reflection" are continuous. While the location of the reflection boundary incorporates positive option premia for the monopolist, it coincides with the zero net present value threshold in the case of perfect competition, which eliminates any positive (expected) profits derived from delaying investment. The results for oligopoly depend on the strategy types, see Section 1.4 below.
The relation between singular control and optimal stopping holds at a quite abstract level, which permits studying irreversible investment more generally than for continuous Markov processes and also in the absence of explicit solutions, see [31] for monopoly and [6] regarding perfect competition. Such a general approach in fact turns out particularly beneficial for studying oligopoly.
Here, the presence of opponent capital processes increases the complexity of the optimization problems, and consistency in equilibrium is another issue. Consequently, one has to be very careful to transfer popular option valuation methods or otherwise acknowledged principles on the one hand, while the chance to obtain closed form solutions shrinks correspondingly on the other hand.
The singular control problems of the monopolist and of the social planner introduced for equilibrium determination under perfect competition are of the monotone follower type. For these control problems there exists a quite general theory built on their connection to optimal stopping, see [7, 19]. This theory facilitates part of our study of oligopoly, too. It is a quite straightforward extension of the polar cases to formalize a general game of irreversible investment with a finite number of players using open loop strategies. In this case, the individual optimization problems are of the monotone follower type as well. The main new problem becomes to ensure consistency in equilibrium.
A crucial facet for us is the characterization of optimal controls by a first order condition in terms of discounted marginal revenue, used by Bertola [12] and introduced to the general theory of singular control by Bank and Riedel [10, 7]. Note that given some investment plan, it is feasible to schedule additional investment at any stopping time; hence, for an optimal plan, the additional expected profit from marginal investment at any stopping time cannot be positive. Contrarily, at any stopping time such that capital increases by optimal investment, marginal profit cannot be negative, since reducing the corresponding investment is feasible.
Based on this intuitive characterization, which is actually sufficient for optimal investment, we show that equilibrium determination can be reduced to solving a single monotone follower problem. However, the final step requires some work on the utilized methods, to which we dedicate a separate discourse.
The actual equilibrium capital processes are derived in terms of a signal process by tracking the running supremum of the latter. Riedel and Su call the signal "base capacity" [31], because it is the minimal capital level that a firm would ever want. Using the base capacity as investment signal corresponds to the mentioned price threshold to trigger investment insofar as adding capacity is always profitable for current levels below the base capacity (resp. when the current output price exceeds the trigger price), but never when the capital stock exceeds the base capacity (resp. when the output price is below the threshold). Tracking the (unique) base capacity is the optimal policy for any starting state or time, similar to a stationary trigger price for a Markovian price process.
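A small numerical sketch of this "track the running supremum" rule (entirely ours; the signal path below is a generic simulated stand-in, not a base capacity derived from any particular model in the text): capital increases only when the signal sets a new maximum, and is never left below it.

```python
# Sketch of the base-capacity rule: starting from capital q0, the firm invests
# just enough to keep its capital at the running supremum of the signal,
# Q_t = max(q0, sup_{s<=t} L_s), so investment occurs only at new maxima.
import numpy as np

def track_base_capacity(signal, q0=0.0):
    """Capital path induced by tracking the signal's running maximum."""
    return np.maximum(q0, np.maximum.accumulate(signal))

rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(0.0, 0.1, size=1000))   # placeholder signal path
capital = track_base_capacity(signal, q0=0.5)

assert np.all(np.diff(capital) >= 0)   # irreversibility: capital never decreases
assert np.all(capital >= signal)       # capital is never below the base capacity
print(capital[-1])
```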
Under certain conditions, the signal process can be obtained as the solution to a particular backward equation, where existence is guaranteed by a corresponding stochastic representation theorem (for a detailed presentation, see [8]; for further applications, [7, 9]).
When the necessary condition for this method is violated, which is typical for oligopoly, one can still resort to the related optimal control approach via stopping time problems. Here, the optimal times to install each marginal capital unit are determined independently, like exercising a real option. The right criterion for this is the opportunity cost of waiting.
These optimal stopping (resp. option exercise) problems form a family which allows a unified treatment by monotonicity and continuity. Indeed, at each point in time, there exists a maximal capital level for which the option to delay (marginal) investment is worthless. This is exactly the base capacity described above, and the same corresponding investment rule is optimal.
As a consequence, irreversible investment is optimal not when the net present value of the additional investment is greater than or equal to zero, but when the opportunity cost of delaying the investment is greater than or equal to zero.
1.3 Strategic option exercise
The incentives of delaying investment due to dynamic uncertainty on the one hand and of strategic preemption on the other hand contradict each other. Therefore, when the considered real option is not exclusive, it is necessary to study games of option exercise. The usual setting in the existing literature
# parallax occlusion mapping in a deferred renderer
Hi,
I'm trying to implement parallax occlusion mapping as in:
http://developer.amd...ketch-print.pdf
However the texture coordinates that are the results of the POM calculation seem to be wrong... see the screenshot.
I followed the POM sample that can be found in the DirectX SDK (June 2010) in the DX9 section. In addition, I'm doing the whole thing in view space instead of the world space that the sample uses. Tangent space is only used to calculate the modified texture coordinates.
here's the G-buffer filling vertex shader with POM calculation:
```glsl
#version 410

uniform mat4 m4_p, m4_mv; //projection and modelview matrices
uniform mat3 m3_n; //normal matrix
uniform vec3 v3_view_pos; //view space camera position
uniform float height_map_scale; //height map scaling value = 0.1

in vec4 v4_vertex; //vertex attribute
in vec3 v3_normal;
in vec3 v3_tangent;
in vec2 v2_texture;

out cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 tbn; //tangent to view space matrix
  vec3 vs_normal;
  vec3 vs_view_dir;
  vec2 ts_pom_offset; //tangent space POM offset
} vertex_output;

void main()
{
  vec3 normal = m3_n * v3_normal; //transform the normal attribute to view space
  vertex_output.vs_normal = normal; //store it unnormalized
  normal = normalize(normal);
  vec3 tangent = normalize(m3_n * v3_tangent);
  vec3 bitangent = cross(normal, tangent);

  vertex_output.tbn = mat3( tangent, bitangent, normal ); //tangent space to view space matrix (needed for storing the normal map)
  mat3 vs_to_ts = mat3(tangent.x, bitangent.x, normal.x,
                       tangent.y, bitangent.y, normal.y,
                       tangent.z, bitangent.z, normal.z); //view space to tangent space matrix

  vertex_output.v2_texture_coords = v2_texture;
  vertex_output.position = m4_mv * v4_vertex; //view space position

  //tangent space pom offset calculation
  vertex_output.vs_view_dir = v3_view_pos - vertex_output.position.xyz;
  vec3 ts_view_dir = vs_to_ts * vertex_output.vs_view_dir;

  //initial parallax offset displacement direction
  vec2 pom_direction = normalize(ts_view_dir.xy);
  float view_dir_length = length(ts_view_dir);
  //determines the furthest amount of displacement
  float pom_length = sqrt(view_dir_length * view_dir_length - ts_view_dir.z * ts_view_dir.z) / ts_view_dir.z;
  //actual reverse parallax displacement vector
  vertex_output.ts_pom_offset = pom_direction * pom_length * height_map_scale;

  gl_Position = m4_p * vertex_output.position;
}
```

And here is the G-buffer filling fragment shader:
```glsl
#version 410

uniform sampler2D texture0; //albedo texture
uniform sampler2D texture1; //normal map
uniform sampler2D texture2; //height map
uniform float far;
uniform int max_samples; // = 130
uniform int min_samples; // = 8
uniform int lod_threshold; // = 4

in cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 tbn;
  vec3 vs_normal;
  vec3 vs_view_dir;
  vec2 ts_pom_offset;
} pixel_input;

out vec4 v4_albedo;
out vec4 v4_normal;
out vec4 v4_depth;

vec2 encode_normals_spheremap(vec3 n)
{
  vec2 enc = (normalize(n.xy) * sqrt(-n.z * 0.5 + 0.5)) * 0.5 + 0.5;
  return enc;
}

void main()
{
  vec3 vs_normal = normalize(pixel_input.vs_normal); //normalize the vectors after interpolation
  vec3 vs_view_dir = normalize(pixel_input.vs_view_dir);
  vec2 texture_dims = textureSize(texture0, 0); //get texture size (512, 512)

  //POM
  //current gradients
  vec2 tex_coords_per_size = pixel_input.v2_texture_coords * texture_dims;
  vec2 dx_size, dy_size, dx, dy;
  vec4 v4_ddx, v4_ddy;
  //in the sample the HLSL ddx, and ddy functions were used. Is dFdx and dFdy the same in GLSL?
  v4_ddx = dFdx( vec4( tex_coords_per_size, pixel_input.v2_texture_coords ) ); //calculate 4 derivatives in one calculation
  v4_ddy = dFdy( vec4( tex_coords_per_size, pixel_input.v2_texture_coords ) );
  dx_size = v4_ddx.xy;
  dy_size = v4_ddy.xy;
  dx = v4_ddx.zw;
  dy = v4_ddy.zw;

  //mip level, mip level integer portion, fractional amount of blending between levels
  float mip_level, mip_level_int, mip_level_frac, min_tex_coord_delta;
  vec2 tex_coords;
  //find min of change in u and v across a quad --> compute du and dv magnitude across a quad
  tex_coords = dx_size * dx_size + dy_size * dy_size;
  //standard mipmapping
  min_tex_coord_delta = max( tex_coords.x, tex_coords.y );
  //compute current mip level, 0.5 * log2(x) is basically sqrt(x)
  mip_level = max( 0.5 * log2( min_tex_coord_delta ), 0 );

  //start the current sample at the input texture coordinates
  vec2 tex_sample = pixel_input.v2_texture_coords;

  if( mip_level <= float(lod_threshold) )
  {
    //this changes the number of samples per ray depending on view angle
    int num_steps = int(mix(max_samples, min_samples, dot( vs_view_dir, vs_normal ) ) );

    float current_height = 0.0;
    float step_size = 1.0 / float(num_steps);
    float prev_height = 1.0;
    float next_height = 0.0;
    int step_index = 0;
    bool condition = true;

    vec2 tex_offset_per_step = step_size * pixel_input.ts_pom_offset;
    vec2 tex_current_offset = pixel_input.v2_texture_coords;
    float current_bound = 1.0;
    float pom_amount = 0.0;

    vec2 pt1 = vec2(0.0, 0.0);
    vec2 pt2 = vec2(0.0, 0.0);
    vec2 tex_offset = vec2(0.0, 0.0);

    while(step_index < num_steps)
    {
      tex_current_offset -= tex_offset_per_step;
      current_height = textureGrad( texture2, tex_current_offset, dx, dy ).x; //sample height map
      current_bound -= step_size;

      if(current_height > current_bound)
      {
        pt1 = vec2( current_bound, current_height );
        pt2 = vec2( current_bound + step_size, prev_height );
        tex_offset = tex_current_offset - tex_offset_per_step;
        step_index = num_steps + 1;
      }
      else
      {
        step_index++;
      }

      prev_height = current_height;
    }

    float delta1 = pt1.x - pt1.y;
    float delta2 = pt2.x - pt2.y;
    float denominator = delta2 - delta1;

    if(denominator == 0.0) //check for divide by zero
    {
      pom_amount = 0.0;
    }
    else
    {
      pom_amount = (pt1.x * delta2 - pt2.x * delta1) / denominator;
    }

    vec2 pom_offset = pixel_input.ts_pom_offset * (1.0 - pom_amount);
    tex_sample = pixel_input.v2_texture_coords - pom_offset;

    if(mip_level > float(lod_threshold - 1.0)) //if we're too far, then only use bump mapping
    {
      mip_level_frac = modf(mip_level, mip_level_int);
      //mix to generate a seamless transition
      tex_sample = mix(tex_sample, pixel_input.v2_texture_coords, mip_level_frac);
    }

    //shadows here
  }

  v4_albedo = texture(texture0, tex_sample); //sample the input albedo and other textures and store them in the g-buffer for lighting later on
  v4_normal.xy = encode_normals_spheremap(pixel_input.tbn * (texture(texture1, tex_sample).xyz * 2.0 - 1.0));
  v4_depth.x = pixel_input.position.z / -far;
}
```
After these G-buffer fills, a simple Blinn-Phong lighting calculation is applied. The result looks distorted rather than like proper POM.
EDIT: forgot to include the screenshot, now it's included.
best regards,
Yours3!f
ok so I tried to find the cause, starting with the vertex shader. I converted the original sample to view space, but still in DX though.
original DX sample vertex shader (converted to view space, except for tangent space calculations):
```hlsl
VS_OUTPUT RenderSceneVS( float4 inPositionOS : POSITION,
                         float2 inTexCoord : TEXCOORD0,
                         float3 vInNormalOS : NORMAL,
                         float3 vInBinormalOS : BINORMAL,
                         float3 vInTangentOS : TANGENT )
{
    VS_OUTPUT Out;

    // Transform and output input position
    Out.position = mul( inPositionOS, g_mWorldViewProjection );

    // Propagate texture coordinate through:
    Out.texCoord = inTexCoord * g_fBaseTextureRepeat;

    float4x4 worldview = mul(g_mWorld, g_mView);

    // Transform the normal, tangent and binormal vectors from object space to homogeneous projection space:
    float3 vNormalWS = mul( vInNormalOS, (float3x3) worldview );
    float3 vTangentWS = mul( vInTangentOS, (float3x3) worldview );
    float3 vBinormalWS = mul( vInBinormalOS, (float3x3) worldview );

    // Propagate the world space vertex normal through:
    Out.vNormalWS = vNormalWS;

    vNormalWS = normalize( vNormalWS );
    vTangentWS = normalize( vTangentWS );
    vBinormalWS = normalize( vBinormalWS );

    // Compute position in world space:
    float4 vPositionWS = mul( inPositionOS, worldview );
    float4 eye = mul(g_vEye, g_mView);

    // Compute and output the world view vector (unnormalized):
    float3 vViewWS = eye - vPositionWS;
    Out.vViewWS = vViewWS;

    // Compute denormalized light vector in world space:
    float3 vLightWS = mul(g_LightDir, g_mView);

    // Normalize the light and view vectors and transform it to the tangent space:
    float3x3 mWorldToTangent = float3x3( vTangentWS, vBinormalWS, vNormalWS );

    // Propagate the view and the light vectors (in tangent space):
    Out.vLightTS = mul( vLightWS, mWorldToTangent );
    Out.vViewTS = mul( mWorldToTangent, vViewWS );

    // Compute the ray direction for intersecting the height field profile with
    // current view ray. See the above paper for derivation of this computation.

    // Compute initial parallax displacement direction:
    float2 vParallaxDirection = normalize( Out.vViewTS.xy );

    // The length of this vector determines the furthest amount of displacement:
    float fLength = length( Out.vViewTS );
    float fParallaxLength = sqrt( fLength * fLength - Out.vViewTS.z * Out.vViewTS.z ) / Out.vViewTS.z;

    // Compute the actual reverse parallax displacement vector:
    Out.vParallaxOffsetTS = vParallaxDirection * fParallaxLength;

    // Need to scale the amount of displacement to account for different height ranges
    // in height maps. This is controlled by an artist-editable parameter:
    Out.vParallaxOffsetTS *= g_fHeightMapScale;
    //Out.vParallaxOffsetTS = vViewWS.xy;

    return Out;
}
```
The sample still worked, so using view space shouldn't be a problem. Next, I tried to debug the app by displaying different values from the vertex shader. Because the view-space calculations were correct, I went on to check the tangent-space calculations, and I found a strange thing:
```hlsl
// Normalize the light and view vectors and transform it to the tangent space:
float3x3 mWorldToTangent = float3x3( vTangentWS, vBinormalWS, vNormalWS );

// Propagate the view and the light vectors (in tangent space):
Out.vLightTS = mul( vLightWS, mWorldToTangent );
Out.vViewTS = mul( mWorldToTangent, vViewWS );
```
So in this part first a view-space to tangent-space matrix is constructed (ignore the variable name, i.e. worldtotangent), and this matrix is used to transform the light direction and view direction vectors to tangent space. Now this wouldn't be a problem, but the way the sample does it is rather strange. It calculates the tangent-space light vector using row-vector order, but in the next line the order is changed, and from maths I know that in this order there is no such operation. Now I looked up the "mul" operation on MSDN (http://msdn.microsoft.com/en-us/library/windows/desktop/bb509628%28v=vs.85%29.aspx) and it turns out that if mul is used like this, then the vector is considered a column vector. Now if the vector is a column vector then the matrix should be treated as a column-major one as well, otherwise this operation doesn't exist. So this line:
Out.vViewTS = mul( mWorldToTangent, vViewWS );
is equal to this, right?
Out.vViewTS = mul( vViewWS, transpose(mWorldToTangent) );
But that's strange. This matrix now isn't a view-space to tangent-space matrix. But then what is it? Could someone please explain this to me?
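For what it's worth, the identity derived above is easy to sanity-check numerically; the snippet below is a plain numpy stand-in for the linear algebra (it does not reproduce HLSL's mul itself): multiplying a column vector by M gives the same result as multiplying the row vector by M transposed.

```python
# Check of the identity quoted above: M @ v (v as a column vector) equals
# v @ M.T (v as a row vector times the transposed matrix).
import numpy as np

rng = np.random.default_rng(42)
M = rng.normal(size=(3, 3))   # stand-in for mWorldToTangent
v = rng.normal(size=3)        # stand-in for vViewWS

col = M @ v                   # analogue of mul( mWorldToTangent, vViewWS )
row = v @ M.T                 # analogue of mul( vViewWS, transpose(mWorldToTangent) )

print(np.allclose(col, row))  # True
```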
ok, so I noticed that the same sample can be found among the RenderMonkey samples. So I checked it out, and found that it is way easier to understand. So I tried to implement the effect in OGL in RenderMonkey and I almost got it right: the texture coordinates now look fine from above, but if I rotate the camera they become distorted.
Here are the files:
Any idea what I'm missing?
So I tried to see what could be the problem, and I noticed an interesting option when you right-click on a model (mesh). You can choose whether RenderMonkey interprets the input geometry as being in either a left- or right-handed coordinate system. After changing it to right-handed, the disc model looked fine, but when I turned it upside down the other side of it became distorted. I tried it with the original DX sample but this issue didn't occur there (with a left-handed coordinate system). So I went on to try out other meshes, since there are various OpenGL examples and they work with these. I found that there are meshes that work correctly, and there are ones that don't. So I thought the issue was the input mesh, but then when I returned to my app to implement the same technique I ran into this distortion issue again. But I'm using a Blender-generated cube as a model, so it should be in right-handed coordinates...

In addition, the DX sample uses world space, but it doesn't seem to transform any attributes to world space, which indicates that the models are already in world space.

But hey, then the algorithm only works for world-space models, or what?

So how could it be generalized to use an object-space input model and transform it to whatever space one likes?

EDIT: furthermore, if the sample claims to have its attributes in world space, then why does it multiply the position with a modelviewprojection matrix?

And changing the modelviewprojection matrix to a viewprojection matrix doesn't change anything...
so I finally solved it. As it turned out the shaders weren't the problem, but the assets, and the settings.
so I used the two textures that were used in the rendermonkey sample.
rgb albedo + rgb normals, height in alpha channel
I created a monkey in blender. I added UVs to it (edit mode, left panel, unwrap->reset), set normals to smooth and exported it into obj format.
I used 0.04 as the height map scale value, 8 as the minimum number of samples and 128 for the maximum number of samples.
Here are the rendermonkey projects + the textures and the monkey:
EDIT: when porting to my engine, I bumped into a lot of weirdness. After a lot of shader modifications I thought, let's go back to the basics and check if the normals and the tangents look right. The normals did, but the tangents didn't, so I went back to the tangent vector calculation. I used some algorithm that I found somewhere on the internet, but it didn't work. So I spent a few hours searching for another algorithm, because I was too lazy to come up with one. Then it popped into my mind that I implemented some helper functions when I developed libmymath. So I looked at them and found calculate_tangent_basis(). How silly is that? ok, I may say as an excuse that I developed the maths library almost a year ago (and I didn't touch it since last October, because it worked correctly...)
well, you can be sure that the shaders are right. Also, if you're interested in the tangent calculation, just look at the link in my signature.
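For readers without access to that library, the per-triangle tangent computation being referred to usually looks roughly like the sketch below (a generic numpy version with our own names and a Gram-Schmidt step; it is not the poster's libmymath code):

```python
# Generic per-triangle tangent from positions and UVs, orthogonalized against
# the normal. This is the usual tangent-basis derivation, shown as a sketch.
import numpy as np

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2, normal):
    e1, e2 = p1 - p0, p2 - p0                  # position edges
    duv1, duv2 = uv1 - uv0, uv2 - uv0          # texture-coordinate edges
    det = duv1[0] * duv2[1] - duv2[0] * duv1[1]
    if abs(det) < 1e-8:                        # degenerate UV mapping
        return np.zeros(3)
    tangent = (e1 * duv2[1] - e2 * duv1[1]) / det
    tangent -= normal * np.dot(normal, tangent)   # Gram-Schmidt against the normal
    length = np.linalg.norm(tangent)
    return tangent / length if length > 0.0 else tangent

p = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
uv = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(triangle_tangent(*p, *uv, np.array([0.0, 0.0, 1.0])))   # [1. 0. 0.]
```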
2022-01-28
Evaluation of $\int_{0}^{\pi/2}\frac{dx}{\left(a\cos^{2}x+b\sin^{2}x\right)^{n}}$
$n=1,2,3,\dots$
I thought about using ... but it did not work.
Micah May
Expert
Hint: Use Feynman's trick. Differentiate the integral with respect to the parameters $a$ and $b$; it can be shown that:
$\frac{\partial I_n}{\partial a}+\frac{\partial I_n}{\partial b}=-nI_{n+1}$
This recursion can alternatively be re-written as:
$I_n=-\frac{1}{n-1}\left(\frac{\partial I_{n-1}}{\partial a}+\frac{\partial I_{n-1}}{\partial b}\right),\qquad n=2,3,\dots$
and notice that $I_1$ can be evaluated rather easily using the substitution $u=\tan(x)$, which gives $I_1=\frac{\pi}{2\sqrt{ab}}$.
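For example, applying the recursion once with $n=2$ (my own check, not part of the original answer):
$I_2=-\left(\frac{\partial I_1}{\partial a}+\frac{\partial I_1}{\partial b}\right)=\frac{\pi}{4}\left(\frac{1}{a^{3/2}b^{1/2}}+\frac{1}{a^{1/2}b^{3/2}}\right)=\frac{\pi(a+b)}{4(ab)^{3/2}},$
which correctly reduces to $\pi/2$ when $a=b=1$.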
# Stoichiometric Defects
Defects in Stoichiometric Solids
It is generally known that all compounds follow the law of definite proportions, but there are certain solid compounds which refuse to obey this universal law. Such solid compounds, which do not possess the exact compositions expected from electronic considerations, have been given the name berthollide or non-stoichiometric compounds. For example, in certain oxides such as $TiO_{1.7-1.8}$, $WO_{2.88-2.92}$ and $Fe_{0.95}O$, metallic hydrides such as $CeH_{2.69}$ and $VH_{0.56}$, sulphides such as $CuFeS_{1.94}$, $Cu_{1.7}S$, $Cu_{1.65}Te$ and $Cu_{1.6}Se$, tungsten bronzes such as $Na_xWO_3$, etc., the ratio between the atoms is not a simple whole number, and hence they are termed non-stoichiometric compounds.
It is clear that such an imbalance of composition is only possible if the structure is in some way irregular, that is, if it possesses defects. Non-stoichiometry means there is an excess of either metal or non-metal atoms.
In a compound $XY_n$, the general picture that each atom sits on an appropriate lattice point, and every lattice point is tenanted by the right kind of atom, is an idealization of the real crystal; it represents the equilibrium state only at the absolute zero of temperature. At any finite temperature in a real crystal, the thermal vibrations of the atoms facilitate the occurrence of lattice defects. These defects in the crystal lattice amount to variations from the regularity which characterizes the material as a whole. The main types of defect are the following.
1. Frenkel Defect
This defect generally arises when an ion occupies an interstitial position between lattice points.
Here the positive ions, being smaller than the negative ions, occupy the interstitial positions. In the figure given above, one of the positive ions occupies a position in the interstitial space rather than its own appropriate site in the lattice, so that a 'hole' is created at its original site, as shown in the figure.
This defect mostly appears in compounds in which the positive and negative ions differ considerably in radius and the coordination number is low.
2. Schottky Defect
This defect arises when some of the lattice points are unoccupied. Such unoccupied points have been given the name lattice vacancies or 'holes'. The figure exhibits the Schottky defect of crystals: two holes exist in the crystal lattice, one due to a missing positive ion and the other due to a missing negative ion.
This defect is generally observed in strongly ionic compounds having a high coordination number and a radius ratio r/R not far below unity; examples are cesium chloride and sodium chloride. Although both types of defect probably characterize crystals of non-stoichiometric compounds, Schottky defects are the more important.
Rees has introduced a symbolism for the constitution of imperfect crystals, in which both the kind of site and its occupant are specified. An atom of type x on its proper lattice site is represented by $x/o_x$. The symbol $x_{1-A}/o_x$ represents that a fraction $1-A$ (with $A<1$) of the x lattice sites is occupied by the correct species of atom; the fraction $A$ remains vacant unless some other species of atom is specified as also located on x lattice sites.
Interstitial positions are generally represented by $\Delta$; hence an interstitial position occupied by a particular species of atom may be symbolized by $x/\Delta$. With the help of this symbolism it is possible to specify the concentration and nature of the lattice defects in any system, and the reactions by which lattice defects are formed can be represented by quasi-chemical equations.
Lattice vacancies (holes) occur in almost all types of ionic solids. However, the Schottky defect appears more often than the Frenkel defect, because the energy needed to form a Schottky defect is much less than that needed to form a Frenkel defect.
3. Substitutional Defects
Interchange of atoms between lattice sites produces this type of defect.
| Type | Example | Nature of disorder |
| --- | --- | --- |
| Frenkel defect | AgBr | Interstitial atoms and vacancies of the same kind |
| Schottky defect | KCl | Vacancies in both the anion and cation lattices |
| Substitutional disorder | CuAu | Interchange of atoms between lattice sites |
Consider a polar compound of formula AB. This can incorporate an excess of the metal B for the following reasons:
(i) by having more vacant A sites than vacant B sites;
(ii) by having a concentration of interstitial A atoms smaller than the concentration of vacant A sites;
(iii) by having a greater concentration of interstitial B atoms than of vacant B sites.
The equation $\Delta G = \Delta H - T\Delta S$
where $\Delta G$ = free energy change
$\Delta H$ = enthalpy change
T = temperature in Kelvin
$\Delta S$ = entropy change
suggests that the production of Frenkel or Schottky lattice defects is an endothermic process. The calculations also give a positive value of $\Delta S$: the defects introduce a certain randomness into the crystal and thereby increase its entropy.
A peculiar phenomenon has been noticed in the case of ionic compounds. For substances of high melting point such as CaO and KCl, the equilibrium constant for defect formation is very small at ordinary temperatures; even so, the concentration of defects becomes noticeable as the temperature approaches the melting point. At $700^{\circ}$C, $SrCl_2$ has been found to contain about 0.1% Frenkel defects.
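As a rough illustration of why the defect concentration only becomes noticeable near the melting point, the equilibrium fraction of Schottky pairs follows $n/N \approx e^{-\Delta H_s/2kT}$. A minimal numerical sketch (the formation enthalpy used is an assumed, representative value, not a measured one for any particular salt):

```python
import math

K_B_EV = 8.617e-5                      # Boltzmann constant in eV per kelvin

def schottky_fraction(delta_h_ev, temp_k):
    """Equilibrium fraction of Schottky pairs, n/N ~ exp(-dH / 2kT)."""
    return math.exp(-delta_h_ev / (2 * K_B_EV * temp_k))

dH = 2.3                               # assumed formation enthalpy, eV per pair
for T in (300, 700, 1000):
    print(T, f"{schottky_fraction(dH, T):.1e}")
# Roughly 5e-20 at 300 K, 5e-9 at 700 K and 2e-6 at 1000 K: negligible at room
# temperature, appreciable only as the temperature approaches the melting point.
```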
Non-stoichiometry in solids is a very general phenomenon; indeed, stoichiometry is exceptional, and it has been shown thermodynamically that a condensed phase, even at equilibrium, is not of unique composition except at its singular points and at temperatures near absolute zero. In crystalline substances which have been prepared at high temperatures, it is not unusual for an abnormally high concentration of lattice defects to be retained on cooling.
The nature of the lattice defects which give the non-stoichiometric character can be ascertained by comparing the density calculated from X-ray measurements with the observed density. Take the case of FeS, which shows a composition range from $FeS_{1.00}$ to $FeS_{1.14}$. One view is that this range is due to interstitial sulphur atoms; according to Hagg and Sucksdorf, however, there is a deficiency of iron, so the range is better written as $Fe_{1.00}S$ to $Fe_{0.88}S$, and there should then be a diminution in density as the sulphur percentage increases.
In general, the tolerance of a crystal lattice for defects increases markedly at elevated temperatures, and the stable range of compositions of non-stoichiometric phases usually becomes progressively broader at high temperatures.
The compound titanium monoxide exhibits a structure like that of sodium chloride. The difference between the measured and calculated densities indicates a high concentration of Schottky defects, and the concentrations of vacant cation and anion sites account for the wide variation in the Ti : O ratio:
| Composition | TiO$_{0.69}$ | TiO$_{1.00}$ | TiO$_{1.12}$ | TiO$_{1.25}$ | TiO$_{1.33}$ |
| --- | --- | --- | --- | --- | --- |
| O sites vacant (%) | 34 | 15 | 9 | 4 | 2 |
| Ti sites vacant (%) | 4 | 15 | 19 | 23 | 26 |
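The compositions quoted above follow directly from the vacancy percentages, since the O : Ti ratio is simply the ratio of occupied sites. A quick check (illustrative script):

```python
vacancies = {            # composition label: (% O sites vacant, % Ti sites vacant)
    "TiO0.69": (34, 4),
    "TiO1.00": (15, 15),
    "TiO1.12": (9, 19),
    "TiO1.25": (4, 23),
    "TiO1.33": (2, 26),
}
for label, (o_vac, ti_vac) in vacancies.items():
    x = (100 - o_vac) / (100 - ti_vac)   # occupied O sites per occupied Ti site
    print(label, round(x, 2))
# Reproduces 0.69, 1.00, 1.12, 1.25 and ~1.32 (quoted as 1.33 in the table).
```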
Generally, transition metal and heavy metal oxides provide evidence for the formation of non-stoichiometric compounds. Non-stoichiometric compounds also show fluorescence, semiconductivity and colour centres. They owe their existence to the high activation energy for conversion to the invariant phases on annealing. It has also been noticed that non-stoichiometric compounds are stable above a critical temperature, below which they break up into phases approximating to rational formulae. For example, tantalum hydride at lower temperatures breaks up into $Ta_2H$ and nearly hydrogen-free tantalum.
Research has shown that at high temperatures solid ionic compounds become ionic conductors through the migration of ions within the crystalline solid. For example, $Cr_2O_3$ has unpaired electrons in the partly filled d levels of its transition metal ions; these electrons are not mobile.
# Isomorphism between Endomorphism(R^n) and Mn(R^op)
My professor said in class that if we consider $R^n$ as an $R$-module, we get a bijection from $\operatorname{End}(R^n)$ to the set of $n\times n$ matrices over $R$, since we can regard any such homomorphism as a linear map from $R^n$ to itself.
But what I don't get is the following: he said that this bijection is not a ring homomorphism, but that there is a ring isomorphism between $\operatorname{End}(R^n)$ and $M_n(R^\mathrm{op})$, where $R^\mathrm{op}$ is the opposite ring.
I know why this is true for the trivial module $M$ over $R$ where $M = R$. However, I fail to see how this works for $R^n$, since I can simply identify a homomorphism from $R^n$ to itself with a matrix. Can anyone explain the technical details of how the opposite ring comes into the picture?
Thanks.
For any $R$-module $N$, $\operatorname{End}(\oplus_{i=1}^n N)\cong M_n(\operatorname{End}(N))$, where the endomorphisms are all $R$-linear.
If you consider $R^n$ as a right $R$-module, then its endomorphism ring is isomorphic to $M_n(\operatorname{End}(R_R))\cong M_n(R)$, since $\operatorname{End}(R_R)\cong R$.
But if you consider $R^n$ as a left $R$-module, then you get $M_n(\operatorname{End}({}_RR))$, and $\operatorname{End}({}_RR)\cong R^{op}$ instead of $R$.
I think you must be considering $R^n$ as a left $R$-module in class.
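To spell out where the op comes from in the simplest case $n=1$ (a sketch of the standard argument): any $f\in\operatorname{End}({}_RR)$ satisfies $f(r)=f(r\cdot 1)=r\,f(1)$, so $f$ is right multiplication by $a:=f(1)$. Writing $\rho_a(r)=ra$, composition gives
$(\rho_a\circ\rho_b)(r)=\rho_a(rb)=rba=\rho_{ba}(r),$
so $\rho_a\circ\rho_b=\rho_{ba}$: composition reverses the order of multiplication, which is exactly the multiplication of $R^{\mathrm{op}}$, and hence $\operatorname{End}({}_RR)\cong R^{\mathrm{op}}$. Plugging $N={}_RR$ into the general fact above then gives $\operatorname{End}({}_RR^n)\cong M_n(R^{\mathrm{op}})$: you can still write every endomorphism as a matrix, but the entries multiply in the opposite order.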
## Performing crossover on trees in genetic algorithm
I'm using a genetic algorithm to solve a problem. Each chromosome is a B* tree (where each node has only 2 child nodes).
I'm wondering how to perform the crossover. I found an example which says that crossover points are chosen in the parents, the trees are sliced at those points, and the child is constructed from the sliced sub-trees.
But I know I need to consider more things. For instance, what if the child tree is not balanced? Or what if the child contains duplicate items?
Any advice on how I should proceed with these questions in mind? Thanks in advance.
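For what it's worth, here is a minimal sketch of the slice-and-swap crossover described above (plain Python binary-tree nodes; names are illustrative, and balancing / duplicate repair are deliberately left as separate post-processing steps, which seems to be where the real questions lie):

```python
import copy
import random

class Node:
    """Minimal binary-tree node (illustrative, not tied to any B*-tree library)."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def collect(node, out=None):
    """List every node in the subtree rooted at `node` (assumes a non-empty tree)."""
    if out is None:
        out = []
    if node is not None:
        out.append(node)
        collect(node.left, out)
        collect(node.right, out)
    return out

def subtree_crossover(parent_a, parent_b, rng=random):
    """Copy parent A, pick a random node in the copy and a random subtree of
    parent B, and graft the latter over the former."""
    child = copy.deepcopy(parent_a)
    target = rng.choice(collect(child))
    donor = copy.deepcopy(rng.choice(collect(parent_b)))
    # Overwrite the target in place so the link from its parent stays valid.
    target.value, target.left, target.right = donor.value, donor.left, donor.right
    return child
```

A common follow-up step is to repair or reject children that violate the constraints, e.g. re-balance the child or replace duplicated items with items missing from it, before admitting it to the population.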
## How are segment trees used to answer interval stabbing queries?
I'm new here and to algorithms in general. I have a question that bugs me, and I have searched Google without any fruitful results, sadly.
Can anybody explain to me how segment trees are used to answer interval stabbing queries? I have searched and searched and have only come up with the beginning of the idea. From my understanding I need to take all the endpoints of every interval and build a segment tree over them, but then what?
How can we recover the original intervals? How can we check whether a given point is contained in any of them and report all the intervals that contain it?
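One common construction, sketched below under a few assumptions (closed intervals, arbitrary query points, illustrative class and method names): build the tree over the sorted endpoints, store each interval at the O(log n) canonical nodes that cover its endpoint range, and answer a stabbing query by walking the single root-to-leaf path of the query point, reporting everything stored along the way.

```python
import bisect

class StabbingSegmentTree:
    """Sketch: intervals live at canonical nodes; a stab walks one root-to-leaf path."""

    def __init__(self, intervals):
        self.intervals = list(intervals)            # list of (lo, hi) with lo <= hi
        self.xs = sorted({x for lo, hi in self.intervals for x in (lo, hi)})
        self.n = len(self.xs)
        self.store = {}                             # node id -> list of interval indices
        for idx, (lo, hi) in enumerate(self.intervals):
            self._insert(1, 0, self.n - 1,
                         bisect.bisect_left(self.xs, lo),
                         bisect.bisect_left(self.xs, hi), idx)

    def _insert(self, node, l, r, ql, qr, idx):
        if qr < l or r < ql:
            return
        if ql <= l and r <= qr:                     # canonical node: store and stop
            self.store.setdefault(node, []).append(idx)
            return
        m = (l + r) // 2
        self._insert(2 * node, l, m, ql, qr, idx)
        self._insert(2 * node + 1, m + 1, r, ql, qr, idx)

    def stab(self, q):
        """Return every stored interval that contains the point q."""
        i = bisect.bisect_right(self.xs, q) - 1     # rightmost endpoint <= q
        if i < 0 or q > self.xs[-1]:
            return []
        hits, node, l, r = [], 1, 0, self.n - 1
        while True:
            hits.extend(self.store.get(node, []))
            if l == r:
                break
            m = (l + r) // 2
            if i <= m:
                node, r = 2 * node, m
            else:
                node, l = 2 * node + 1, m + 1
        # The final filter keeps the answer exact for points strictly between endpoints.
        return [self.intervals[k] for k in hits
                if self.intervals[k][0] <= q <= self.intervals[k][1]]

t = StabbingSegmentTree([(1, 3), (2, 5), (4, 7)])
print(t.stab(2.5))   # [(1, 3), (2, 5)]
```

Construction takes O(n log n); a query walks O(log n) nodes and then reports the matching intervals.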
## In the right-rotation case for red-black trees, there is a less efficient way to recolor it; why is it not O(log(n)) anymore?
So the first time I tried to recolor this insertion case from memory, I ended up with the right-hand-side recoloring; of course the left recoloring is more efficient, as the loop ends at that point. If, however, in the right-hand case we check whether the grandparent of a is the root (and color it black) and otherwise continue the loop from that node, I read that this makes the recoloring no longer O(log(n)). Why is that? It still seems to me to be O(log(2n)) at worst, even if the number of rotations performed is no longer O(1).
## Cutting down trees for 5G
Have you all noticed trees getting cut down on residential streets and/or near commercial buildings? Trees are being removed because they block 5G signals that can't get through the leaves or the trunk.
## RB trees from any balanced BST?
Given any perfectly balanced binary search tree, is it always possible to assign a coloring to the nodes so that it becomes a red-black tree? If so, how do you prove this, and if not, what would be a counterexample?
## Number of Spanning Trees
We are given a graph G with N nodes (numbered 1 to N) and M edges.
Let's create a big graph B with NK nodes (numbered 1 through NK) and MK edges. B consists of K copies of G: formally, for each k (0≤k≤K−1) and each edge between nodes u and v in G, there is an edge between nodes u+kN and v+kN in B.
We take the complement of the graph B and name it H.
The task is to calculate the number of spanning trees in H.
Note: the maximum number of nodes in graph G is 20, the maximum value of K is 10^8, and the graph G doesn't have any self-loops or multiple edges.
My approach: when the value of K is small (less than about 40), I am able to apply Kirchhoff's matrix-tree theorem, with cubic complexity in the number of nodes of H (Gaussian elimination gives an O(n^3) algorithm for computing determinants). However, my solution doesn't scale to the very large graphs formed for high values of K. I figure I should be able to derive a formula for the number of spanning trees because of the repeated copies of the small graph G, but because of my limited mathematical knowledge I have not been able to converge on such a formula.
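For reference, a minimal exact-arithmetic version of that Kirchhoff step (a sketch of the brute-force count only, not of the closed-form solution for large K; names are illustrative):

```python
from fractions import Fraction

def count_spanning_trees(n, edges):
    """Matrix-tree theorem: the number of spanning trees of a simple undirected
    graph equals any cofactor of its Laplacian. Fractions keep the arithmetic exact."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    m = n - 1                                   # delete the last row and column
    A = [row[:m] for row in L[:m]]
    det = Fraction(1)
    for i in range(m):                          # Gaussian elimination, O(n^3)
        piv = next((r for r in range(i, m) if A[r][i] != 0), None)
        if piv is None:
            return 0                            # singular: no spanning trees
        if piv != i:
            A[i], A[piv] = A[piv], A[i]
            det = -det
        det *= A[i][i]
        for r in range(i + 1, m):
            factor = A[r][i] / A[i][i]
            for c in range(i, m):
                A[r][c] -= factor * A[i][c]
    return int(det)

# Sanity check: K_4 has 4^2 = 16 spanning trees (Cayley's formula).
print(count_spanning_trees(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))
```

For the actual problem one would build the edge list of H (the complement of the K copies of G) and feed it to this function, which of course only stays feasible while NK is small.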
Any help would be greatly appreciated.
## Number of distinct BFS, DFS trees in a complete graph
What is the number of distinct BFS and DFS trees in a complete graph?
My approach is like this:
For DFS: from the root we have (n-1) possible choices for the first tree edge, then (n-2) possible choices for the next, ..., and finally 1 possible choice; and we can choose any of the n nodes as the root node. Hence the number of distinct DFS trees = n*(n-1)*(n-2)*...*1 = n!.
For a BFS tree, if we take a node as the root we have to explore all of its neighbours (which all become its children), so the number of distinct BFS trees = the number of nodes we can choose as the root, i.e. n.
Is this approach correct?
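A small brute-force check of the reasoning above (it enumerates rooted DFS trees by branching over every neighbour-visit order, so it is only feasible for tiny n; names are illustrative):

```python
from math import factorial

def all_dfs_trees(adj, root):
    """Directed edge sets of every DFS tree rooted at `root`."""
    trees = set()

    def step(stack, visited, edges):
        if not stack:
            trees.add(edges)
            return
        u = stack[-1]
        fresh = [v for v in adj[u] if v not in visited]
        if not fresh:                               # u is finished: backtrack
            step(stack[:-1], visited, edges)
            return
        for v in fresh:                             # branch on the next neighbour visited
            step(stack + [v], visited | {v}, edges | {(u, v)})

    step([root], {root}, frozenset())
    return trees

n = 4
adj = {u: [v for v in range(n) if v != u] for u in range(n)}      # complete graph K_n
dfs_count = sum(len(all_dfs_trees(adj, r)) for r in range(n))
print(dfs_count, factorial(n))                                    # 24 24

# BFS in K_n: every non-root vertex is at distance 1, so the BFS tree rooted at r is
# always the star centred at r, giving exactly one tree per root, i.e. n in total.
```

For small n this matches the n! rooted DFS trees (each one is a Hamiltonian path from its root) and the n BFS trees claimed above.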
## Number of compatible trees with an ancestry matrix
Suppose you are given an ancestry matrix $$M$$, which means that $$M[ij] = 1$$ iff node $$i$$ is an ancestor of node $$j$$. If $$M$$ (treated as an adjacency matrix) represents no cycles, the corresponding graph is a tree (or a forest). My question is: what is the number of trees whose ancestry matrix is $$M$$?
Is this question any simpler than counting the number of directed graphs compatible with a general ancestry matrix?
## Directed Trees: Finding all the edges and vertices in a specific direction
I am an electrical engineer without experience in graph theory. However, I have a problem which I believe can be solved by graph theory. We have a directed tree, such as the one below. We want to find all the vertices and edges starting from a vertex in a given direction. For instance, in the figure below, we want to identify the vertices and edges downward from vertex 1 (the red lines). Is there an algorithm to do this (preferably in MATLAB)? The edges have equal weight.
## Can an optimal global classification tree be constructed from trees for single categories?
Suppose we have a set of objects, each belonging to one of the disjoint categories $$c_1, … , c_n$$. Suppose further that for every single category $$c_i$$ there is a corresponding set $$T_i$$ that contains all optimal binary classification trees that only split the objects based on whether they belong to $$c_i$$ or not. Optimal is defined as having the least sum of distances from the root to the leaf nodes.
Question: Can the sets $$T_1, …, T_n$$ be used effectively to construct a single global binary classification tree that classifies objects for every category and is also optimal? | {} |