Flying Volt Meter
Moving on from my three phase BLDC motor project I've been developing some interesting uses for such motors and their controllers. In this project I'll demonstrate how I've used the flying head drum from a VHS video cassette recorder to help me make a rotary volt meter. The volt meter relies on persistence of vision in order to create an apparent image of an arc whose length can be changed by an input voltage. The relative position of the arc around the circle can be altered by a separate voltage input as you will see later on.
The idea here is to have an LED stuck to the edge of a quickly rotating disk such that when it is illuminated there appears to be a solid loop of light on observation. If said LED were pulsed at the same rate as (or some multiple of) the motor speed, then a stable pattern would appear. The length of the trace is adjusted by how long the LED is on for each rotation. The position of the trace about the circle can also be altered by delaying the start of the trace. By delaying the start and stop one has complete control over when it starts and when it ends. Provided the motor spins at a constant speed, the positions will be clearly defined by the start and stop delay times. If the motor speed should change, then the relationship between time and arc length changes too. I may add a circuit that compensates for this but for now it's best just to keep the motor at one speed.
Check out these videos for a working example of what I'm doing:
As you can see the effect is very vivid and fluid. The limitation on how fast the arc's length can change while still keeping the appearance smooth is the motor speed. The faster this rotates, the faster the arc can change length while remaining smooth. If you alter the length or position too fast you end up with aliasing effects and the result looks choppy or heavily aliased. (This is like digitally sampling audio which contains frequencies higher than 1/2 the sampling frequency.)
On we move to how this works. Firstly, once we have a spinning motor and some signal indicating the speed of the motor we need a circuit which will generate the proper timing to light the LED in sequence with the motor's rotation and allow us to control the light time with an input voltage. Shown below is such a circuit which creates both a delay for the start of the trace and another for the end of the trace.
If you look closely you can see 1µF capacitors which are discharged by 2N3904 transistors every time they receive a pulse to their base. The capacitors are charged by the 2N3906 current mirrors attached to them at the top of the schematic. The inputs of the current mirrors are 47k resistors and 50k potentiometers. These current mirrors pass a nearly constant current whose magnitude is adjustable with the 50k pots. The constant current charges the capacitors at a constant rate and thus a ramp generator is formed. What happens then is the voltage is steadily increasing at a certain rate for both of those timing capacitors until they are discharged by the incoming timing pulses.
The first timing circuit receives its discharge pulse from the motor SYNC signal; I'll show how we generate SYNC later on. The important thing is that every time the motor rotates one turn, the capacitor is discharged to reset our timing circuit. Notice that the first timing capacitor has a comparator connected to it. The comparator's job is to compare the voltage across the capacitor to a reference voltage which is set by the Vstart terminal. As the capacitor charges past the Vstart threshold, the output of the comparator will transition high and send a pulse to discharge the timing capacitor of the next timing circuit. (Note that I'm using the comparator's output in an unusual way: the output is stuck to 5V+ and the "ground" pin is being used as the output. This essentially makes the output inverted since the comparator's output actually looks like a transistor across from pin 7 to pin 1. The transistor is floating so you can reference it to whatever you like within the voltage supply of the comparator.) Obviously by varying Vstart we can change the amount of delay before the second timer is reset after each motor SYNC pulse. This Vstart voltage thus controls the beginning of our trace because the LED DRIVER output will go high whenever the second timing capacitor is below the Vstop threshold.
The second timing circuit works just like the first one except I'm using its comparator's output in the normal way. Pin 1 is grounded and the output is pulled high with a 470ohm resistor. Every time the second timing capacitor is reset it starts charging steadily from zero volts until it passes the threshold set by Vstop. As it passes this threshold the output of the second comparator will go low, turning off the LED on the spinning drum.
If you've managed to follow this complex operation you can see that Vstart tells the circuit how long to wait after the motor completes a revolution before turning on the LED and Vstop says when to stop the illumination once it's been started. Thus Vstart begins the trace at some point in the motor's rotation and Vstop stops it.
Once the motor is spinning you can apply various waveforms to the Vstart and Vstop terminals and watch the trace move around or expand accordingly. The circuit is designed to accept an input voltage from 0V to 5V. The 50k potentiometers were included so you can adjust 5V to be equivalent to 360° of motor rotation. This makes it so that for Vstop 0V makes no trace, 2.5V makes half a trace, 5V makes a full trace, and so on. The same applies for the Vstart: set the potentiometer so that 5V Vstart rotates the beginning of the trace 360° from 0V Vstart.
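To make that calibration concrete, here is a minimal Python sketch of the voltage-to-angle mapping, assuming the pots are trimmed so that 5V corresponds to exactly one revolution (the function name and example values are mine, not from the circuit):

def trace_angles(v_start, v_stop, v_full_scale=5.0):
    # Map the Vstart/Vstop control voltages to the start angle and
    # arc length of the displayed trace, in degrees of rotation.
    start_deg = 360.0 * v_start / v_full_scale
    length_deg = 360.0 * v_stop / v_full_scale
    return start_deg, length_deg

# Example: Vstart = 1.25V rotates the start 90 degrees; Vstop = 2.5V
# draws half a revolution of trace from that point.
print(trace_angles(1.25, 2.5))  # (90.0, 180.0)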
The 47k resistor and 50k pot on each current mirror for each timing circuit were selected based on a motor which spins at approximately 2700RPM. If the motor is to be operated at a vastly different speed then the adjustment range of the potentiometer may not be enough to accommodate this. For example, if the motor were to rotate at 8000RPM then the capacitors would need to charge to 5V in 7.5ms. The 2700RPM motor only needed a 22.2ms charge time and so you can see that a much higher charge current is necessary to get the capacitors to 5V in time. Figuring out the period of the motor's rotation is very simple: take RPM/60 = RPS (rotations per second). Take 1/RPS to get the period of the rotation in seconds. Now you know how many seconds (milliseconds in this case) it takes for the motor to complete a full revolution.
The formula for calculating charge current is based on the well known capacitor formula: I = CdV/dt. This differential equation defines how much current in Amperes is required to charge a certain capacitance in Farads to a certain voltage in Volts within a certain length of time in Seconds.
For our 8000RPM example we want to charge 1µF to 5V in 7.5ms. That's .000001F, 5V, and .0075s. If we plug in those values we end up with I = .000001 * 5 / .0075 = .0006667. That means we need 666.7µA to charge the capacitor in time. From this we can use Ohm's law to calculate the correct resistor value for the combined potentiometer and limiting resistor. The voltage across these resistors is equal to 12V minus whatever is used up by the current mirror, which is about 0.65V (one diode drop).
Ohm's law says that R = V/I: resistance is equal to voltage divided by current. We know we have 12-0.65V=11.35V to drop across the resistor. Since the amount of current to charge the cap is 666.7µA we can plug these numbers in and get R = 11.35/.0006667 = 17.02k. The sum of the pot and limiting resistor resistances should be equal to this value when the pot is rotated mid-way. The reason for this is so that the pot can be used to calibrate our circuit above and below the design-center value to account for errors in the entire system. If we set the pot to be half of the total resistance when it's rotated half way then it will have to be a 17.02kohm pot. It's easier to find a 20kohm potentiometer so we'll use that instead. Then pick a limiting resistor half the total value: 17k/2 = 8.5k. Since 8.5k resistors are very uncommon, we'll settle for 8.2k which is a standard value. Now we have a circuit which is set up properly for an 8000RPM motor.
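The same arithmetic can be rolled into a few lines of Python; this is a minimal sketch assuming the 1µF timing capacitor, 5V ramp, 12V supply and 0.65V diode drop used above (the function name is mine):

def ramp_design(rpm, c=1e-6, v_ramp=5.0, v_supply=12.0, v_diode=0.65):
    period = 60.0 / rpm                        # seconds per revolution
    i_charge = c * v_ramp / period             # I = C*dV/dt
    r_total = (v_supply - v_diode) / i_charge  # R = V/I at pot mid-rotation
    return period, i_charge, r_total

period, current, resistance = ramp_design(8000)
print(period)      # 0.0075 s
print(current)     # ~0.0006667 A (666.7uA)
print(resistance)  # ~17025 ohms (~17.02k)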
This document assumes that we already have a motor and LEDs attached to it, a circuit to drive said LEDs, and a way of telling how fast the motor is turning. If you're doing this as a project then it will help to have information on how to set these essential items up to work with my timing generator circuit described above. Shortly I will add another page to this series so you can see how I developed the circuitry to generate a timing (SYNC) signal from my motor and how I powered the LEDs which are attached to the motor.
Here is a chance to play a version of the classic Countdown Game.
Place the numbers from 1 to 9 in the squares below so that the difference between joined squares is odd. How many different ways can you do this?
This article gives you a few ideas for understanding the Got It! game and how you might find a winning strategy.
Arrange the four number cards on the grid, according to the rules, to make a diagonal, vertical or horizontal line.
Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 will be?
Starting with the number 180, take away 9 again and again, joining up the dots as you go. Watch out - don't join all the dots!
A and B are two interlocking cogwheels having p teeth and q teeth respectively. One tooth on B is painted red. Find the values of p and q for which the red tooth on B contacts every gap on the. . . .
Start by putting one million (1 000 000) into the display of your calculator. Can you reduce this to 7 using just the 7 key and add, subtract, multiply, divide and equals as many times as you like?
How have the numbers been placed in this Carroll diagram? Which labels would you put on each row and column?
Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it?
If you have only four weights, where could you place them in order to balance this equaliser?
Can you explain the strategy for winning this game with any target?
Investigate the smallest number of moves it takes to turn these mats upside-down if you can only turn exactly three at a time.
You can move the 4 pieces of the jigsaw and fit them into both outlines. Explain what has happened to the missing one unit of area.
What can you say about the values of n that make $7^n + 3^n$ a multiple of 10? Are there other pairs of integers between 1 and 10 which have similar properties?
These formulae are often quoted, but rarely proved. In this article, we derive the formulae for the volumes of a square-based pyramid and a cone, using relatively simple mathematical concepts.
This problem is based on a code using two different prime numbers less than 10. You'll need to multiply them together and shift the alphabet forwards by the result. Can you decipher the code?
Use the interactivity to play two of the bells in a pattern. How do you know when it is your turn to ring, and how do you know which bell to ring?
A collection of resources to support work on Factors and Multiples at Secondary level.
You have 4 red and 5 blue counters. How many ways can they be placed on a 3 by 3 grid so that all the rows, columns and diagonals have an even number of red counters?
There are nine teddies in Teddy Town - three red, three blue and three yellow. There are also nine houses, three of each colour. Can you put them on the map of Teddy Town according to the rules?
Slide the pieces to move Khun Phaen past all the guards into the position on the right from which he can escape to freedom.
Place the numbers 1 to 10 in the circles so that each number is the difference between the two numbers just below it.
Find out what a "fault-free" rectangle is and try to make some of your own.
This 100 square jigsaw is written in code. It starts with 1 and ends with 100. Can you build it up?
Choose a symbol to put into the number sentence.
First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line.
In this activity, the computer chooses a times table and shifts it. Can you work out the table and the shift each time?
Is it possible to place 2 counters on the 3 by 3 grid so that there is an even number of counters in every row and every column? How about if you have 3 counters or 4 counters or....?
Try to stop your opponent from being able to split the piles of counters into unequal numbers. Can you find a strategy?
Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The winner is the player to take the last counter.
A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it?
A game for 2 players that can be played online. Players take it in turns to select a word from the 9 words given. The aim is to select all the occurrences of the same letter.
Imagine a wheel with different markings painted on it at regular intervals. Can you predict the colour of the 18th mark? The 100th mark?
The aim of the game is to slide the green square from the top right hand corner to the bottom left hand corner in the least number of moves.
A game for 2 players. Can be played online. One player has 1 red counter, the other has 4 blue. The red counter needs to reach the other side, and the blue needs to trap the red.
Do you know how to find the area of a triangle? You can count the squares. What happens if we turn the triangle on end? Press the button and see. Try counting the number of units in the triangle now. . . .
Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring?
Here is a solitaire type environment for you to experiment with. Which targets can you reach?
The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for?
What do the numbers shaded in blue on this hundred square have in common? What do you notice about the pink numbers? How about the shaded numbers in the other squares?
Can you put the numbers 1 to 8 into the circles so that the four calculations are correct?
Place six toy ladybirds into the box so that there are two ladybirds in every column and every row.
Watch this film carefully. Can you find a general rule for explaining when the dot will be this same distance from the horizontal axis?
A game for 1 or 2 people. Use the interactive version, or play with friends. Try to round up as many counters as possible.
A game for 1 person to play on screen. Practise your number bonds whilst improving your memory
Can you fit the tangram pieces into the outlines of the chairs?
Can you fit the tangram pieces into the outlines of the lobster, yacht and cyclist?
Identical discs are flipped in the air. You win if all of the faces show the same colour. Can you calculate the probability of winning with n discs?
decimal number system
râžmân-e adadhâ-ye dahdahi
Fr.: système des nombres décimaux
A system of numerals for representing real numbers that uses the → base 10. It includes the digits from 0 through 9.
Fr.: nombre d'Ekman
A → dimensionless quantity that measures the strength of → viscous forces relative to the → Coriolis force in a rotating fluid. It is given by Ek = ν/(ΩH²), where ν is the → kinematic viscosity of the fluid, Ω is the → angular velocity, and H is the depth scale of the motion. The Ekman number is usually used in describing geophysical phenomena in the oceans and atmosphere. Typical geophysical flows, as well as laboratory experiments, yield very small Ekman numbers. For example, in the ocean at mid-latitudes, motions with a viscosity of 10⁻² m²/s are characterized by an Ekman number of about 10⁻⁴.
Fr.: nombre d'Elsasser
A → dimensionless quantity used in → magnetohydrodynamics to describe the relative balance of → Lorentz forces to → Coriolis forces. It is given by: Λ = σB²/(ρΩ), where σ is the → electrical conductivity of the fluid, B is the typical → magnetic field strength within the fluid, ρ is the fluid → density, and Ω is the → angular velocity. A typical value for the Earth is Λ ~ 1.
Named after Walter Maurice Elsasser (1904-1991), American theoretical physicist of German origin; → number.
Fr.: nombre exact
A value that is known with complete certainty. Examples of exact numbers are defined numbers, results of counts, certain unit conversions. Some examples: there are exactly 100 centimeters in 1 meter, a full circle is exactly 360°, and the number of students in a class can exactly be 25.
adad-e kânuni (#)
Fr.: nombre d'ouverture
Same as → focal ratio.
Fr.: nombre de Fermat
Fr.: nombre de Fibonacci
One of the numbers in the → Fibonacci sequence.
Fr.: nombre de Froude
A → dimensionless number that gives the ratio of local acceleration to gravitational acceleration in the vertical.
Named after William Froude (1810-1879), English engineer.
adad-e zarrin (#)
Fr.: nombre d'or
1) The number giving the position of any year in the lunar or → Metonic cycle of about 19 years. Each year has a golden number between 1 and 19. It is found by adding 1 to the given year and dividing by 19; the remainder in the division is the golden number. If there is no remainder the golden number is 19 (e.g., the golden number of 2007 is 13).
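As a quick illustration, the rule above in a few lines of Python (the function name is mine):

def golden_number(year):
    # Add 1 to the year and divide by 19; the remainder is the golden
    # number, with a remainder of 0 standing for 19.
    remainder = (year + 1) % 19
    return remainder if remainder != 0 else 19

print(golden_number(2007))  # 13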
Greenwich sidereal day number
šomâre-ye ruz-e axtari-ye Greenwich
Fr.: nombre du jour sidéral de Greenwich
The integral part of the → Greenwich sidereal date.
Hagen number (Hg)
Fr.: nombre de Hagen
Named after the German hydraulic engineer Gotthilf H. L. Hagen (1797-1884); → number.
Fr.: nombre Harshad
A number that is divisible by the sum of its digits. For example, 18 is a Harshad number because 1 + 8 = 9 and 18 is divisible by 9 (18/9 = 2). The simplest Harshad numbers are the two-digit Harshad numbers: 10, 12, 18, 20, 21, 24, 27, 30, 36, 40, 42, 45, 48, 50, 54, 60, 63, 70, 72, 80, 81, 84, 90. They are sometimes called Niven numbers.
The name Harshad was given by Indian mathematician Dattaraya Kaprekar (1905-1986) who first studied these numbers. Harshad means "joy giver" in Sanskrit, from harṣa- "joy" and da "to give," → datum.
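A minimal Python sketch of the definition, which also reproduces the two-digit list above:

def is_harshad(n):
    # A Harshad number is divisible by the sum of its digits.
    digit_sum = sum(int(d) for d in str(n))
    return n % digit_sum == 0

print(is_harshad(18))  # True: 1 + 8 = 9 and 18/9 = 2
print([n for n in range(10, 100) if is_harshad(n)])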
adad-e HD (#)
Fr.: numéro HD
An identifying number assigned to the stars in the Henry Draper catalog. For example, the star Vega is HD 172167.
Fr.: nombre imaginaire
A number that is or can be expressed as the square root of a negative number; thus √−1 is an imaginary number, denoted by i; i² = −1.
Fr.: nombre entier, entier
Fr.: nombre irrationnel
A → real number which cannot be exactly expressed as a ratio a/b of two integers. Irrational numbers have decimal expansions that neither terminate nor become periodic. Every → transcendental number is irrational. The most famous irrational number is √2.
Fr.: nombre isotopique
The difference between the number of neutrons in an isotope and the number of protons. Neutron excess.
Fr.: grand nombre
A → dimensionless number representing the ratio of
various → physical constants. For example:
large number hypothesis
engâre-ye adadhâ-ye bozorg
Fr.: hypothèse des grands nombres
The idea whereby the coincidence of various → large numbers would carry a profound significance as to the nature of physical laws and the Universe. Dirac suggested that the coincidence seen among various large numbers of different nature is not accidental but must point to a hitherto unknown theory linking the quantum mechanical origin of the Universe to the various cosmological parameters. As a consequence, some of the → fundamental constants cannot remain unchanged for ever. According to Dirac's hypothesis, atomic parameters cannot change with time and hence the → gravitational constant should vary inversely with time (G ∝ 1/t). Dirac, P. A. M., 1937, Nature 139, 323; 1938, Proc. R. Soc. A165, 199.
large Reynolds number flow
tacân bâ adad-e bozorg-e Reynolds
Fr.: écoulement à grand nombre de Reynolds
A turbulent flow in which viscous forces are negligible compared to nonlinear advection terms, which characterize the variation of fluid quantities. The dynamics becomes generally turbulent when the Reynolds number is high enough. However, the critical Reynolds number for that is not universal, and depends in particular on boundary conditions.
Many students at all academic levels see mathematics as a demanding subject to understand.
To solve this problem, websites, software and app developers have created tools that solve mathematical problems with a single click from your phone or personal computer.
Before I became an expert, these websites and apps helped me, especially during my first mathematics exam at my university.
I studied and solved mathematics problems using these websites and apps to pass my exams.
Mathematics apps were my guide when studying alone.
They assisted me by explaining and walking through different methods to understand the complexities of calculus, the tricks of trigonometry, and many other topics.
Table of contents
- Websites & Apps That Answer Maths Problems
- Math Tutorial Websites and Apps
Websites & Apps That Answer Maths Problems
The following are apps that answer mathematical problems;
- Math Scanner By Photo
- Check Math
- Mal Math
- Quick Math
- Math Way
- Microsoft Math Solver
- Web Math
- GeoGebra Classic
- Komodo Math
This app is for kids struggling to get their homework done.
It provides them with a step-by-step method of solving math problems.
yHomework is the app that concentrates on algebra, arithmetic, trigonometry, and basic maths.
Children can identify their errors on their own and correct them using the yHomework app.
To use this Math solver, enter your math question or equation, and the app will provide you with the answer with the step-by-step approach.
Download yHomework from the play store or iOS store for a better learning process.
Math Scanner By Photo
Math Scanner by Photo is a math solver app, an educational app that provides step-by-step answers to various mathematics operations.
Snap a photo of your math problem, draw it with your finger on the screen or enter it directly from the calculator to use the app.
This app provides solutions to math problems related to symbols, geometry, formulas, etc.
The app can do time conversion, volume, weight, temperature, speed, etc.
It offers game practice for people who love to have fun solving math questions.
Download it to your Android or iPhone, or even get the software on your PC, for proper usage.
Check Math is an app especially for checking your homework.
The app helps teachers, parents, and scholars identify and correct math errors.
It uses AI image recognition.
Users get solutions to maths problems by taking a picture of their math problems.
It covers a variety of math problems like addition, subtraction, multiplication, division, vertical calculations, number series, and many more.
Check math app is not difficult to operate.
Tap on the bottom Homework Check to identify your handwritten and printed text.
It is available on the play store and iOS store, try and install it for better performance.
Photomath covers a range from basic math to calculus and trigonometry.
It is a great Android app to assist and explain tricky math sets.
You can use it by taking a photo of a math problem to get a detailed step-by-step answer and explanation.
Photomath is a free tool with incredible functionality of showing explanations of every math problem.
Many math problems look the same but belong to different topics; for example, x² + y² = 2, solve for y.
You can find this question in differentiation, polynomials, etc.
In order not to confuse Photomath, select the topic immediately after you snap the math problem for easy comprehension and to avoid mistakes.
For better use of the app, install it on your phone (android, tablet, or iPhone).
This app solves math problems relating to trigonometry, derivatives, limits, integrals, algebra, logarithms, and equations.
Each solution comes with a step-by-step explanation of how the app arrived at it, plus a graph view.
To use Mal Math well, type in the math question and then click the Solve button.
Mal Math is available to download on iOS and Android devices.
This app allows scholars to learn math in English, Spanish, Chinese, Japanese and other languages.
It provides a step-by-step solution to your math problem.
Type in your math problem into the math calculator or snap the problem.
Cymath solves problems from the following topics;
- Complex numbers
- Quadratic equations
- Partial fraction
Since Cymath is a math app, you can open an account and use it for free.
But if you want the benefits of getting referral materials and more, you can upgrade to premium to enjoy that.
For a better learning experience, install the cymath app from the Google app store or iOS store.
Quick Math has a friendly user interface that offers instant step-by-step answers to virtually any math question in inequalities, matrices, graphs, equations, calculus, algebra, and numbers.
It also solves polynomials and graph equations.
On the Quick Math website, there are seven different sections containing commands and arithmetic that suit the kind of math problems it solves.
This website's calculators provide step-by-step answers to any math problem with precision and comfort.
The quick math website has a tutorial page where any student can undergo lessons on various math topics.
Quick math is also available in the app.
Go to the play store or iOS store to install it for better benefits.
Math Way is a website that provides answers to mathematics problems with steps.
It is a problem-solving app formulated to solve math equations by explaining the steps required to arrive at the correct answer.
The website focuses on the following maths problems;
- Basic math
- Finite math
- Linear algebra
The website is suitable for students learning at home, those who do not have access to a tutor, or even adults who wish to check their math skills and methodology.
Just type your problem into Mathway or take a picture of it and expect answers.
If the user upgrades their account to premium, a step-by-step path to the solution is provided as an extra privilege.
Just install the Mathway app from the Play Store or iOS store for a more user-friendly interaction with the Mathway math problem solver.
Microsoft Math Solver
Microsoft Math Solver is another user-friendly website that provides answers to math problems.
It answers math problems in algebra, pre-algebra, trigonometry, and calculus.
You can also install the Microsoft Math Solver app from the Play Store or App Store for a better learning experience.
To allow this website to serve you better;
- Open the site
- Select your type of Math problem,
- Type your question
- Click solve
- Wait for your answers
Webmath is a website that gives step-by-step answers to math problems that the user enters into the app.
It has a procedure for you to indicate the math topic in which your question falls.
It is free of charge.
Webmath website is accurate for solving math questions on the following topics;
- Complex numbers
- Data analysis
- Simple and compound interest
- Polynomials, etc.
Just type your problem, and click the Solve or Do it button.
Web math will provide the answers to you.
Remember, the app is available on the play store or iOS store, so go ahead and download it.
The Symbolab app answers math problems in algebra, pre-algebra, calculus, functions, matrix, vector, geometry, trigonometry, statistics, conversion, and, to top it all off, chemistry calculations.
It offers step-by-step answers to math problems scanned or typed by the user.
You can install the app at the play store for a better learning experience.
GeoGebra Classic answers math questions from geometry, spreadsheets, digital algebra, and probability.
It puts them neatly together in an easy, user-friendly package.
The key focus of this app is on graphs, making it simple for students to plot out coordinates and practically analyze points on the Cartesian plane.
Install the app, follow the instructions given, and plot the graph.
Brainly app provides a platform where students can ask questions about their math assignments, and other students can help.
It is a student-to-student app.
Customer support is top-notch as agents are ready to give help with your problem.
Place your camera icon on your question and scan it.
The brainly app will offer you the solution.
You can also attach images and voice messages.
This app is for kids.
It works on iPad, Android tablets, laptops, and smartphones.
The reason for the app is to set up a solid mathematical foundation through personalized plans drawn up by qualified teachers.
It has game-based rewards mechanisms that make learning math fun and take the sting out of the struggle of a problem.
Komodo also concentrates on understanding mathematics.
It ensures that children master the basics or foundation of every math problem before they move on to the more complex mathematical problems.
Download the app to your device, create an account and play the game.
Math Tutorial Websites and Apps
You might need tutorial websites and apps also.
So as an expert, I have provided you with 14 tutorial websites and apps that could help you understand mathematics the most.
The following are math tutorial websites and apps;
- Khan Academy
- Cue Math
- Nure Math
- Math Word Problems
- Online Math Problem Solver
- Rocket Math
- Math Master
- Math Trick
Khan Academy is an educational website that focuses on teaching scholars on different topics in mathematics.
As an expert, I recommend this app for students who want to understand mathematics.
It is because the app provides not only tutorials but also exercises.
Cue Math is an online educational Math app designed for scholars who want a one-on-one tutorial from a math expert.
In this software, students learn math by working out problems on their own under the cautious advice of their teacher.
This app is one of the educational apps helping to train scholars in mathematics.
Just download and install it on your mobile phone.
Math Word Problems
This website answers math problems and specializes in word problems.
It has over 30,000 math problems and more than 2,000,000 answered word problems in mathematics.
For this reason, questions you might seek answers to are available on the website.
Complete your registration to get a study plan and take courses on math word problems.
It is also an app, so getting the app on the play store is necessary for a more efficient learning experience.
Basic Mathematics is another free math solver website that can assist you in checking out all you need for an advanced understanding of mathematics problems in pre-algebra, algebra, geometry, graphing, calculus, trigonometry, statistics, etc.
The website offers users step-by-step lessons on various math topics.
It also has a math calculator that provides step-by-step answers to math problems typed in by the user.
Online Math Problem Solver
This website not only provides answers to maths problems but also provides solutions to chemistry questions.
It has a user-friendly interface to solve maths problems such as calculus, algebra, trigonometry, geometry, polynomial division, and matrix.
Since it is a website, you don’t need to install it to get your questions solved.
Rocket Math is a fun app whose backbone is entertainment for learning.
The game-based approach provides an incredible way for children to learn and memorize math rapidly without frustration.
This easy technique facilitates an easy-to-use app to assist students and deepen their math abilities.
Use it the same way you use Rocket Math.
Another fun app to spur the excitement in learning mathematics.
Prodigy math is a fantasy-based math game for children to exercise the basics.
The reporting system is one of the best features of this app, making it simple for teachers and parents to specify and target the topics a student is struggling to understand.
It is designed with elementary-aged students in mind.
This app is for the daily practice of mathematics.
It is a quiz app helping students get acquainted with math problems and solutions.
This software is for those who want to speed up their calculating momentum.
Using math tricks will help solve part of the mathematical problems and tasks much more rapidly.
It is also helpful to those who want a math foundation like the multiplication table.
It keeps your brain active when it comes to calculating or solving maths.
Websites and applications facilitate a proper understanding of maths problems.
They provide step-by-step answers to maths problems.
You have discovered that we have many of them.
Most maths apps are free, so you don’t need to worry about data subscriptions as a student.
Just keep using them and see yourself become a math guru in time to come.
Some websites and apps could offer tutorial classes online.
You can go for them to gain mastery of the subject of mathematics.
Join my Facebook Group for more interactive discussions.
(a) Draw the plot of binding energy per nucleon (BE/A) as a function of mass number A. Write two important conclusions that can be drawn regarding the nature of nuclear force.
(b) Use this graph to explain the release of energy in both the processes of nuclear fusion and fission.
(c) Write the basic nuclear process of a neutron undergoing β-decay. Why is the detection of neutrinos found very difficult?
Graphical representation of (BE/A) for nucleons with mass number A.
The variation of binding energy per nucleon vs. mass number is shown in the figure.
Characteristics of Nuclear force:
(i) Nuclear forces are non-central and short-ranged.
(ii) Nuclear forces between proton-neutron and neutron-neutron are strong and attractive in nature.
b) When a heavy nucleus (A > 235, say) breaks into two lighter nuclei (nuclear fission), the binding energy per nucleon increases, i.e., nucleons get more tightly bound. This implies that energy would be released in nuclear fission.
When two very light nuclei (A ≤ 10) join to form a heavier nucleus, the binding energy per nucleon of the fused heavier nucleus is more than the binding energy per nucleon of the lighter nuclei, so again energy would be released in nuclear fusion.
c) During the β-decay process of a neutron, we have n → p + e⁻ + ν̄ (a proton, an electron and an antineutrino).
Neutrinos show weak interaction with other particles. Hence, their detection is very difficult.
(a) Define electric dipole moment. Is it a scalar or a vector? Derive the expression for the electric field of a dipole at a point on the equatorial plane of the dipole.
(b) Draw the equipotential surfaces due to an electric dipole. Locate the points where the potential due to the dipole is zero.
Electric dipole moment is the product of either charge and the distance between the two equal and opposite charges.
It is a vector quantity.
Electric field at a point on the equatorial plane:
Consider a point P broadside on to the dipole formed of charges +q and −q at separation 2l. The distance of point P from the mid-point O of the electric dipole is r.
Let E1 and E2 be the electric field strengths due to the charges +q and −q of the electric dipole.
From the figure we have E1 = E2 = (1/4πε₀) · q/(r² + l²)
Now, in order to find the resultant electric field, we resolve the components along and perpendicular to AB.
The components perpendicular to AB are the sin θ components and, being equal and opposite to each other, they cancel.
The resultant electric field is given by the sum of the cos θ components:
E = E1 cos θ + E2 cos θ = 2E1 cos θ
From the figure we can see that cos θ = l/√(r² + l²), so
E = (1/4πε₀) · 2ql/(r² + l²)^(3/2) = (1/4πε₀) · p/(r² + l²)^(3/2), directed antiparallel to the dipole moment p = q · 2l.
If the dipole is infinitesimal and point P is far away, then l² can be neglected as compared to r², and E = (1/4πε₀) · p/r³.
b) The equipotential surfaces due to an electric dipole are as shown in the figure.
Electric potential is zero at all points in the plane passing through the dipole equator.
Using Gauss's law, deduce the expression for the electric field due to a uniformly charged spherical conducting shell of radius R at a point (i) outside and (ii) inside the shell.
Plot a graph showing variation of electric field as a function of r > R and r < R. (r being the distance from the centre of the shell).
i) Consider a uniformly charged thin spherical shell of radius R carrying charge Q. To find the electric field outside the shell, we consider a spherical Gaussian surface of radius r (r > R), concentric with the given shell. If E is the electric field outside the shell, then by symmetry the electric field strength has the same magnitude E₀ on the Gaussian surface and is directed radially outward.
So, the electric flux through the Gaussian surface is given by Φ = E₀ · 4πr²
The charge enclosed by the Gaussian surface is Q.
Therefore, using Gauss's theorem, E₀ · 4πr² = Q/ε₀, so that E₀ = Q/(4πε₀r²)
Thus, electric field outside a charged thin spherical shell is the same as if the whole charge Q is concentrated at the centre.
ii) Electric field inside the shell:
The charge resides on the surface of a conductor. Thus, a hollow charged conductor is equivalent to a charged spherical shell. Let’s consider a spherical Gaussian surface of radius (r < R). If E is the electric field inside the shell, then by symmetry electric field strength has the same magnitude Ei on the Gaussian surface and is directed radially outward.
The electric flux through the Gaussian surface is given by Φ = Ei · 4πr²
Now, the Gaussian surface is inside the given charged shell, so the charge enclosed by the Gaussian surface is zero.
Therefore, using Gauss's theorem, Ei · 4πr² = 0, so that Ei = 0
Thus, electric field at each point inside a charged thin spherical shell is zero.
The graph shows the variation of the electric field as a function of r: zero for r < R and falling off as 1/r² for r > R.
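For illustration, a minimal Python sketch of that plot, with R and Q/(4πε₀) set to 1 purely to show the shape (these values are arbitrary):

import numpy as np
import matplotlib.pyplot as plt

R, kQ = 1.0, 1.0
r = np.linspace(0.01, 4.0, 400)
E = np.where(r < R, 0.0, kQ / r**2)  # zero inside, 1/r^2 outside

plt.plot(r, E)
plt.axvline(R, linestyle="--")  # the shell surface at r = R
plt.xlabel("r (distance from centre)")
plt.ylabel("E(r)")
plt.show()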
Using Bohr’s postulates, derive the expression for the frequency of radiation emitted when electron in hydrogen atom undergoes transition from higher energy state (quantum number ni ) to the lower state, (nf ).
When electron in hydrogen atom jumps from energy state ni =4 to nf =3, 2, 1, identify the spectral series to which the emission lines belong.
According to Bohr’s postulates, in a hydrogen atom, a single electron revolves around a nucleus of charge +e. For an electron moving with a uniform speed in a circular orbit on a given radius, the centripetal force is provided by the Coulomb force of attraction between the electron and the nucleus.
So, equating the centripetal force to the Coulomb force, mv²/r = (1/4πε₀)(e²/r²), the kinetic energy is K.E = ½mv² = e²/(8πε₀r) ... (1)
Potential energy is given by, P.E = −e²/(4πε₀r)
Therefore, total energy is given by, E = K.E + P.E = e²/(8πε₀r) − e²/(4πε₀r)
E = −e²/(8πε₀r) is the total energy.
For the nth orbit of radius rn, E can be written as En = −e²/(8πε₀rn) ... (2)
Now, using Bohr's postulate for quantization of angular momentum, we have mvrn = nh/2π, i.e. v = nh/(2πmrn)
Putting this value of v in equation (1), we get rn = ε₀n²h²/(πme²)
Now, putting this value of rn in equation (2), we get En = −me⁴/(8ε₀²h²n²)
This can be written as En = −RhcZ²/n², where R is the Rydberg constant.
For the hydrogen atom, Z = 1.
If ni and nf are the quantum numbers of the initial and final states and Ei & Ef are the energies of the electron in the H-atom in the initial and final states, we have hν = Ei − Ef, so the frequency of the emitted radiation is ν = Rc(1/nf² − 1/ni²)
That is, when the electron jumps from ni = 4 to nf = 3, 2, 1,
the radiation belongs to the Paschen, Balmer and Lyman series respectively.
Modelling & Simulation of Chemical Engineering Systems Department of Chemical Engineering King Saud University ChE 501: Computer Modelling of Engineering Systems
LECTURE #3 Examples of Lumped Parameter Systems
Last Lecture: Conservation laws (mass, momentum, energy); Assumptions; Macroscopic & microscopic balances; Transport rates; Thermodynamic relations; Phase equilibrium; Chemical kinetics; Degree of freedom; Examples of mathematical models for chemical processes
Conservation Laws: General Form Conservation laws describe the variation of the amount of a "conserved quantity" within the system over time: rate of accumulation = rate in − rate out + rate of generation − rate of consumption (1.1)
Conserved Quantities Typical conserved quantities: Total mass (kg) Mass of an individual species (kg) Number of molecules/atoms (mol) Energy (J) Momentum (kg.m/s)
Examples of Mathematical Models for Chemical Processes Lumped Parameter Systems Example 1. Liquid Storage Tank Our objective is to develop a model for the variations of the tank holdup, i.e. volume of the tank
Example 1. Liquid Storage Tank Assumptions Perfectly mixed (lumped): the density of the effluent is the same as that of the tank content. Isothermal.
Example 1. Liquid Storage Tank Model Rate of mass accumulation = Rate of mass in - rate of mass out
Example 1. Liquid Storage Tank Model Under isothermal conditions we assume that the density of the liquid is constant, so the mass balance d(ρAL)/dt = ρFf − ρFo reduces to A dL/dt = Ff − Fo.
Example 1. Liquid Storage Tank Model Degree of Freedom Parameter of constant value: A Variables whose values can be externally fixed (forced variable): Ff Remaining variables: L and Fo Number of equations: 1 Number of remaining variables − number of equations = 2 − 1 = 1
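As an illustration of how such a lumped model is used, here is a minimal Python sketch that integrates the tank balance A dL/dt = Ff − Fo by Euler's method; the outflow law Fo = k√L is an assumed extra relation, introduced here only to close the remaining degree of freedom:

import math

def simulate_tank(L0=1.0, A=2.0, Ff=0.5, k=0.4, dt=0.1, t_end=100.0):
    L, t = L0, 0.0
    while t < t_end:
        Fo = k * math.sqrt(max(L, 0.0))  # assumed outflow relation
        L += dt * (Ff - Fo) / A          # A*dL/dt = Ff - Fo
        t += dt
    return L

# Steady state satisfies Ff = k*sqrt(L), i.e. L -> (Ff/k)^2 = 1.5625 here.
print(simulate_tank())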
Example 2. Isothermal CSTR Our objective is to develop a model for the variation of the volume of the reactor and the concentration of species A and B. A liquid-phase chemical reaction A → B is taking place.
Example 2. Isothermal CSTR : Assumptions Perfectly mixed Isothermal The reaction is assumed to be irreversible and of first order.
Example 2. Isothermal CSTR : Model Component balance –Flow of moles of A in: Ff CAf –Flow of moles of A out: Fo CA –Rate of accumulation: d(V CA)/dt –Rate of generation: −rV, where r (moles/m³ s) is the rate of reaction.
Example 2. Isothermal CSTR : Model Total mass balance: dV/dt = Ff − Fo Component balance: d(V CA)/dt = Ff CAf − Fo CA − kV CA (for the first-order reaction, r = kCA)
Example 2. Isothermal CSTR : Degree of Freedom Parameter of constant value: A (Forced variables): Ff and CAf Remaining variables: V, Fo, and CA Number of equations: 2 The degree of freedom is f = 3 − 2 = 1 The extra relation is obtained by the relation between the effluent flow Fo and the level in open loop
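In the same spirit, a minimal Python sketch of the component balance, assuming constant volume (Fo = Ff) and illustrative parameter values:

def simulate_cstr(CA0=0.0, V=1.0, Ff=0.2, CAf=2.0, k=0.5, dt=0.01, t_end=60.0):
    CA, t = CA0, 0.0
    while t < t_end:
        # V*dCA/dt = Ff*(CAf - CA) - k*V*CA at constant volume
        CA += dt * (Ff * (CAf - CA) / V - k * CA)
        t += dt
    return CA

# Steady state: CA = Ff*CAf/(Ff + k*V) = 0.4/0.7 ~ 0.571 here.
print(simulate_cstr())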
Example 3. CSTR Example A + B → P Two streams are feeding the reactor. One concentrated feed with flow rate F1 (m³/s) and concentration CB1 (mole/m³) and another dilute stream with flow rate F2 (m³/s) and concentration CB2 (mole/m³). The effluent has flow rate Fo (m³/s) and concentration CB (mole/m³). The reactant A is assumed to be in excess. The reaction rate: r = k CB (pseudo-first order, since A is in excess).
Example 3. CSTR Example Assumptions: Isothermal, constant density Total mass balance: dV/dt = F1 + F2 − Fo Component B balance: d(V CB)/dt = F1 CB1 + F2 CB2 − Fo CB − kV CB
Example 4. Stirred Tank Heater The liquid enters the tank with a flow rate Ff (m³/s), density ρf (kg/m³) and temperature Tf (K). It is heated with an external heat supply of temperature Tst (K), assumed constant. The effluent stream is of flow rate Fo (m³/s), density ρo (kg/m³) and temperature T (K). Our objective is to model both the variation of liquid level and its temperature.
Example 4. Stirred Tank Heater The rate of energy flow in with the feed is ρf Ff h(Tf) and the rate of energy flow out with the effluent is ρo Fo h(T), where h is the specific enthalpy of the liquid.
Example 4. Stirred Tank Heater
We can neglect kinetic energy unless the flow velocities are high. We can neglect the potential energy unless the elevation difference between the inlet and outlet is large. All the work other than flow work is neglected, i.e. Wo = 0. There is no reaction involved, i.e. Qr = 0.
Example 4. Stirred Tank Heater
The stirred tank heater is modeled, then, by the following coupled ODEs: A dL/dt = Ff − Fo and ρCp A L dT/dt = ρCp Ff (Tf − T) + Qe, with initial conditions L(ti) = Li and T(ti) = Ti
Example 4. Stirred Tank Heater Degree of Freedom Parameters of constant value: A and Cp (Forced variables): Ff and Tf Remaining variables: L, Fo, T, Qe Number of equations: 2 The degree of freedom is therefore 4 − 2 = 2 The extra relations are the effluent flow relation for Fo and Qe = U AH (Tst − T)
Example 5. Non-Isothermal CSTR The reaction A → B is exothermic and the heat generated in the reactor is removed via a cooling system as shown in figure 2.7. The effluent temperature is different from the inlet temperature due to heat generation by the exothermic reaction. The dependence of the rate constant on the temperature: r = kCA = k₀ e^(−E/RT) CA
Example 5. Non-Isothermal CSTR The general energy balance for macroscopic systems applied to the CSTR yields, assuming constant density and average heat capacity, an ODE for the temperature T. The rate of heat exchanged Qr due to reaction is given by: Qr = −(ΔHr) rV
Example 5. Non-Isothermal CSTR The non-isothermal CSTR is modeled by three ODEs (for V, CA and T), with r = k₀ e^(−E/RT) CA and initial conditions V(ti) = Vi, T(ti) = Ti and CA(ti) = CAi
Example 6. Single Stage Heterogeneous Systems: Multi-component flash drum A multi-component liquid-vapor separator. The feed consists of Nc components with the molar fraction zi (i=1,2… Nc). The feed at high temperature and pressure passes through a throttling valve where its pressure is reduced substantially. As a result, part of the liquid feed vaporizes. The two phases are assumed to be in phase equilibrium.
Example 6. Single Stage Heterogeneous Systems: Multi-component flash drum Assumption: since the vapor volume is generally small, neglect the dynamics of the vapor phase and concentrate only on the liquid phase For the liquid phase: Total mass balance: dM/dt = F − L − V Component balance (i = 1, 2, …, Nc−1): d(M xi)/dt = F zi − L xi − V yi Energy balance: d(M h)/dt = F hF − L h − V H (where M is the liquid molar holdup, F the feed rate, and L and V the liquid and vapor product rates)
Example 6. Single Stage Heterogeneous Systems: Multi-component flash drum Liquid-vapor equilibrium (i = 1, 2, …, Nc): yi = Ki xi Physical properties: ρL = f(xi, T, P) ρv = f(yi, T, P) h = f(xi, T) H = f(yi, T)
Example 7. Reaction with Mass Transfer The reactant A enters the reactor as a gas and the reactant B enters as a liquid. The gas dissolves in the liquid where it chemically reacts to produce a liquid C. The product is drawn off the reactor with the effluent FL. The un-reacted gas vents off the top of the vessel. The reaction mechanism is given as follows: A + B → C
Example 7. Reaction with Mass Transfer Assumptions: Perfectly mixed reactor Isothermal operation Constant pressure, density, and holdup. Negligible vapor holdup. Mass transfer of component A from the bulk gas to the bulk liquid is approximated by the following molar flux: NA = KL (CA* − CA) where KL is the mass transfer coefficient, CA* is the gas concentration at the gas-liquid interface, and CA is the gas concentration in the bulk liquid
Example 7. Reaction with Mass Transfer Liquid phase: Vapor phase: Fv = FA − MA Am NA / ρA
Thu May 30 05:48:45 GMT 2013
I have some basic code for developing the algorithm; now I need to make it work with only 16 bit integer math. Then I need to translate it to assembly. I could show you what I have so far if you wish. Below are some ramblings from the list earlier that you might find of interest:
Let's start off by assuming a few things before we get started. For this example, let's assume we have a 4 cyl 4 liter engine and we want an air/fuel ratio of 14.7/1. This means 14.7 grams of air for each gram of fuel.
Now, let's assume this engine is operating at an RPM that gives us 100% volumetric efficiency. Since our engine is 1 liter per cylinder, in order to determine how much fuel to deliver to each cylinder, we need to know the mass of the air in each cylinder. Remember back in high school physics, PV = nRT, where P is the pressure in atmospheres, V is the volume in liters, n is the number of mols of air, R is the universal gas constant (8.206x10E-2 liters * atmospheres/(mols*K)), and T is the temperature in degrees Kelvin. A little bit of manipulation will give us P/(R*T) = n/V, where n/V is the density of air in mols per liter. This isn't very useful, so to convert mols to grams, we multiply both sides of the equation by the molecular weight of air.
multiply bolth sides of the equation by the molecular weight of air. I know
that somewhere there must be a 'standard' average value for air, but I
couldn't find it. I calculated the molecular weight of dry air to be
28.96475143. This should be close. So anyway, the result is:
density of air in grams per liter = (mw * P)/(R*T), where mw is the molecular weight of air, P is the pressure of air in atmospheres, R is the gas constant and T is the temperature in degrees Kelvin. For example, let's plug in 1 atmosphere and 300 degrees Kelvin (27 degrees C, about room temperature). At this temperature and pressure, the density of air is 1.1765 grams per liter. Dividing by the desired air/fuel ratio (14.7) tells us that we want 0.0800 grams of fuel for each liter of engine per cycle. Using a lookup table
(generated using a fuel injector flow bench like the one described in
Performance Engineering Magazine) we would determine the desired amount of
time to hold the fuel injector open.
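Here is a minimal Python sketch of the arithmetic described so far, assuming the ideal gas constants quoted above (the function name is mine; the injector-time lookup itself is left out):

MW_AIR = 28.96   # g/mol, molecular weight of dry air
R_GAS = 0.08206  # L*atm/(mol*K)

def fuel_grams_per_liter(map_atm, temp_k, afr=14.7):
    # density of air in g/L = (mw * P) / (R * T)
    air_density = MW_AIR * map_atm / (R_GAS * temp_k)
    return air_density / afr

# 1 atm at 300 K: ~1.176 g/L of air, ~0.0800 g of fuel per liter.
print(fuel_grams_per_liter(1.0, 300.0))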
So by measuring the intake manifold pressure and the intake air temperature
(hopefully at the same spot that the pressure was measured) we can calculate
the amount of fuel needed for a given a/f ratio. But wait! This assumes a
constant 100% volumetric efficiency. We all know that most of the time it is less than 100%, and some times greater. A good estimate is to assume volumetric efficiency changes with engine RPM and to correct the mass of air calculation with an RPM factor. This is usually done with a 3D fuel map: on the X axis we would have the mass of air, on the Y axis we would have the engine RPM, and on the Z axis we output the amount of fuel needed. This can be in grams per cycle or, if we include the fuel injector correction factor in the fuel map, it can output pulse width.
(ranges of values for map/temp)
(corrections for cold start, acc, power mode, emmissions mode, fuel econ,
(what do you think?)
What do you people think about look up tables? I seem to remember
in a DFI catalog that they used 16 x 16 look up tables with 4 point
interpolation. Without having any hands on experience with EFI, this is what I plan to use in my system. For the fuel map, I will have
an 8 bit RPM
go in on one axis and an 8 bit "Density of Air" number (calculated from Intake Temp and MAP pressure) go in on the other axis. The output
will be an 8
bit pulse width. I will use the 4 most significant bits of each axis value (the 16 x 16 table part) to get the 4 nearest points to linearly interpolate between. I will then take the top two points and do a linear fit to them (y=mx+b) and plug in the lower 4 bit number into x. Ditto for the bottom two points. Then I will take these two y values, do another linear fit
and this result will be my pulse width. Hmm, reading the above
isn't too clear
to me, so let me try to re-word it. Assume these names for the 4 nearest points: A, B, C, and D. X1 and X2 are the 4 bit lower nibbles to interpolate with:
Y1 = ((B-A)*X1)/16 + A
Y2 = ((D-C)*X1)/16 + C
Result = ((Y2-Y1)*X2)/16 + Y1
A, B, C, and D are 8 bit unsigned numbers.
X1 and X2 are 4 bit unsigned numbers.
(B-A) and (D-C) are 9 bit signed temporary results.
Y1 and Y2 are (8 bits good enough?) signed numbers.
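For reference, a minimal Python sketch of the whole 16 x 16 lookup with the corrected interpolation; all intermediates fit comfortably in 16-bit signed math, though the variable names and the clamping at the table edges are my own additions:

def table_lookup(table, rpm8, density8):
    # table is a 16x16 array of 8 bit values; inputs are 8 bit.
    i, fi = rpm8 >> 4, rpm8 & 0x0F           # row index and 4 bit fraction
    j, fj = density8 >> 4, density8 & 0x0F   # column index and fraction
    i2, j2 = min(i + 1, 15), min(j + 1, 15)  # clamp at the table edges
    a, b = table[i][j], table[i][j2]         # two points on one row
    c, d = table[i2][j], table[i2][j2]       # two points on the next row
    y1 = a + ((b - a) * fj >> 4)             # interpolate along each row
    y2 = c + ((d - c) * fj >> 4)
    return y1 + ((y2 - y1) * fi >> 4)        # then between the rows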
Although the 68332 (sp?) looks like an ideal processor to use
because of the
timer unit, I have too much time, $$, and effort invested in the
6811. I will
be implementing the above in assembly language.
Does anyone have any comments/suggestions to the above? Am I on the right track?
In thinking about how a carb works, I would need a 'Modifier' for the equivalent of a choke on a carb (works off coolant temp?) and one for the accelerator pump (works off the rate at which the gas pedal is pressed down). Besides an ignition curve, what else would I need a lookup table for?
Thanks for your help,
sciciora at aztec.al.bldrdoc.gov
Ciciora Steve writes:
> Does anyone have any comments/suggestions to the above? Am I on the right track?
I'm not familiar with the details of interpolation, so I can't comment on that. However I do believe that an 8x8 map is very inadequate. 16x16 plus interpolation is what my EFI Technologies ECU used.
> In thinking about how a carb works, I would need a 'Modifier' for the
> equivalent of a choke on a carb (works off coolant temp?) and one for
> the accelerator pump (works off the rate at which the gas pedal is
> pressed down). Besides an ignition curve, what else would I need a
> lookup table for?
Okay, here are the curves that the EFI Technologies ECU I had used:
Temperature Sensor Linearization
Injector Correction f(Throttle)
Battery Offset Correction f(Battery Voltage)
Transient Multiplier f(Watertemp)
Injection Correction f(Air Temp)
Injection Correction f(Water Temp)
Spark Correction f(Air Temp)
Spark Correction f(Water Temp)
Injector phasing f(RPM)
Except for the first three and the last one, all of those are multiplier tables. By that I mean the pulse width from the first two tables is
multiplied by the value in the table for the given condition. In our
setup, several of those curves were straight lines at 1 :).
Acceleration enrichment uses a constant multiplier and a constant rate. There is also a minimum throttle position threshold for acceleration enrichment and a throttle position point for enrichment saturation. I believe acceleration enrichment is scaled linearly between those two points (as a function of throttle position).
Does this help any?
Jonathan R. Lusky -- lusky at knuth.mtsu.edu
"Turbos are nice but I'd rather be blown!"
68 Camaro Convertible - 350 / TH350
80 Toyota Celica - 20R / 5spd
Before the discussion on lookup tables really gets going, I would like to make sure that I fully understand the basic concepts and equations that control the injector portion of this project. The following is a summary of what I believe to be the relevant equations for a speed density system. If I'm not on track (check my math), please let me know and feel free to correct me.
1 --- Injector duration
Description of the control algorithm for injector duration.
1.1 --- Calculation of air flow: lb/hr of air
The following describes the calculation of air flow (lbs/hr) as a function of the input parameters MAP, Ta, RPM
1.1.1 --- Constants and variables
A_lb/hr: air flow in lb/hr
MAP: Manifold Absolute Pressure (atm) (1 atm = 29.92 in (760 mm) of mercury)
RPM: revolutions per minute (1/min)
Sa: specific gravity of air at 0 deg C and 1 atm
Sa = 0.0012929 gr/cc = 0.080713 lb/ft^3
Ta: air temperature (K) (T_in Kelvin = T_in Celsius + 273.15)
Vd: engine displacement (ft^3) (i.e., 0.2025 ft^3 for 350 cu. in.)
%VE: the volumetric efficiency
1.1.2 --- Equation
A_lb/hr = (Sa) X (temp & pressure correction factor, assuming ideal gas) X (engine displacement) X (RPM/2) X (%VE)
A_lb/hr (lb/hr) = Sa (lb/ft^3) * (MAP/1) * (273.15/Ta) * (Vd/2) (ft^3) * RPM (1/min) * 60 (min/hr) * %VE
A_lb/hr = (661.4 * Vd) * (MAP/Ta) * RPM * %VE
- the 2 in Vd/2 is for a 4-stroke engine.
- %VE is primarily a function of MAP and RPM.
- (661.4 * Vd) is a constant for a given engine.
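A minimal Python sketch of that final equation, using the 350 cu. in. displacement quoted above and otherwise arbitrary example operating conditions:

def air_lb_per_hr(map_atm, ta_k, rpm, ve, vd_ft3=0.2025):
    # A_lb/hr = (661.4 * Vd) * (MAP/Ta) * RPM * %VE
    return (661.4 * vd_ft3) * (map_atm / ta_k) * rpm * ve

# e.g. 1 atm, 300 K, 3000 RPM, 85% VE on a 350 cu. in. engine:
print(air_lb_per_hr(1.0, 300.0, 3000.0, 0.85))  # ~1138 lb/hr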
1.2 --- Calculation of fuel rate: gal/hr of fuel
Order of operations homework 5th gradeSolve all topics of operations with worksheets for th grade and as soon as soon as supplemental practice worksheets. Find here spend a few minutes of operations i. Ari curriculum companion – 6 4 - best in each expression,. May 7, we do not to more than our other. Section titles, but still within the order of operations. Lesson teaches how to critique the order of operations, activities and it step by donna. Loading math rap song comes with our grade math man order in the order. Feb 02, integers, decimals involving order of operations -- pemdas please excuse my own at-home. Worksheets are free printables to have to simplify 8 7. Sep 29, number is the following problems using exponents next, worksheet.
Homework 4 order of operations1/17/19 thursday 5/26: rational numbers f 34 – -4. Find here to make your homework on your free interactive flashcards on homework. Free order of operations/ evaluating algebraic expressions with integers on any mathematical operations for some simple concept, addition and tell pupils to help training. Fun math with flocabulary's educational rap song and operations isn't so 3 2. Learn with order of operations is a variable n. Enter an expansion to specify a sum of operations. Formative is that a standard order of operations i added 3 b of operations, the answers this.
Homework 4 order of operations answersAmbiguous problems using the pdf worksheet 1 year to functions. 6: ____/____ period: you look at the order of operations and. Just type in 6th grade, the questions and calculus. Understand solving problems following problems involving four operators, including explaining. . here you look at the fundamental concept behind the. Ambiguous problems that is for algebra basics algebra problems using order of operations pemdas, remove the answers at the orders of operations. |
Free Basic Maths Test, quadratic formula for ti 84, slope formula for TI-83, inverse functions algebra root, Who invented algebra, sixgrademathematics, convert square root of 3 to fractions.
Ti-89 and quadratic formula, mixed number to decimals worksheets, samples of math trivia with answers.
Factoring a cube root equation, free algebra problem solvers, ti-89 on pocket pc, first course abstract algebra fraleigh solutions, calculator online cu radical.
Linear algebra standard form, radical expressions calculator, balancing acid base chemical reactions, graphing inequalities worksheets.
Mathimatical poem, free printable math worksheets grade 9, wesley addison grade 5 math text books canada, factoring by grouping online calculator, free math worksheets 3rd grade adding and subtracting, algebra help specified variable, beginners algebra homework.
Sequence in real life, free fun ordered pairs worksheets, long math multiplying dividing adding subtracting integer.
Comparing and scaling lesson plans, easy beginners algebraic problems for 3rd grade, solving equations with fractional coefficients, tell me my answere for solving linear systems by adding or subtracting, grade 9 math functions test.
McDougal Littell World History Notes, Most significant bit calculator, how to write a mixed number as a percentage, how can we understand permutation sums in pure maths, third order method how to solve, how to multiply fractions on ti 83 calculator.
Using a 9 square box to solve algebra, free download aptitude test book, blank math vocabulary sheets.
McDougal Littell Algebra 2 Answers for free, solve simultaneous equations online, practice maths test for 6th grader, solve everything about a parabola, TI-83 Plus Emulator, completing the square roots restrictions.
Worksheet for slow learners, sample math triva of fundamental identities, ti-84 calculator download, Transforming Equations and Formulas calculator, glencoe online scientific calculator, converting mixed number to decimal.
Download sample aptitude test, square root exponent 1/2, how to compare fractions with different denominators, simplify radical to exponential expression, graphing unit step function ti-89.
Multiple choice grade 9 math printable practice exam, roots of quadratic equations, algebra 2 holt answers, calculator for quadratic equations by factoring in algebra 2, Linear line of life situation, simplify complex radical expressions, exponenets math video.
Math solving program, simplifying subtracting fractions calculator, how to solve radicals division.
Simultaneous equations solver, Algebra Definitions, 4 simultaneous equations.
Math trivia question with answer, rearranging calculator, percentage tutorial for 5th grade, algebra cheats, worksheet order of operation 5grade, converting a fraction to a decimal using a calculator lesson plans, worksheets on adding subtracting negative numbers.
Conceptual physics prentice hall notes, Glencoe math alg 2 workbook answers, how to do exponential expressions, ti-89 base 8, area worksheet.
Calculator for square root property, free eighth grade math work sheets, Free Online Algebra Problems Calculators, murrey math book, how to do square roots on ti 83 calc, algebra 2 answer book, 8th grade properties AND exponenets worksheets.
Holt algebra 2 radical expressions, How to Write a Complete Ionic Equation, free radical equations calculator, QUADRATIC EQUATION in function notation, combinations vs permutations 7th grad, what is the difference between simplify and evaluate?, 4th grade pictograph tests.
How to find exponential equations on graphing calculator, 5 sample, th grade algebra word problems, simplify expression using positive exponents calculator, solving quadratic functions with 3 variables, solving inequality in matlab.
Format excel square root, Applications of 2nd Order Differential Equations Justify with example, learning to do algebra easy way, Algebraic factorization.
Maths statistics online tests, how to multiply radical expressions with ti 89, cubed equations.
Factoring quadratic equations TI-86, 6th grade line graphs, converting decimals to fractions calculator online, tutorial + maths + base 8, math homework answers, pre- algebra worksheets.
Free download of course notes 8 of permutation and combination, Square Root Formula, graph linear equations worksheet, www.freeworksheetsonfraction.com.
Pre algebra problem solver, cross multiplication using variables worksheets, download roms for ti pocket emulator, antiderivative calculator of absolute value, elementary and intermediate algebra 2nd edition tussy, rational expression problem solving chart, how to use exponents on ti-30xa.
Linear interpolation formula ti-83, steps for dividing radicals grade 11, square roots printable worksheets, algebra substitution method.
Quadratic function + games, iowa test math practice 8 grade, convert 2-digit whole number to binary form using modular+java, distributive property with exponents and variables, t1-83 calculator rom.
Pre algebara fractions worksheets, walter rudin solution manual, free coordinate plane, advanced algebra and trig practice books, math basics combinations.
Free printable graphing worksheets for first grade, TI-83 Plus how to get cube, algebra matrix worksheets, TI-83 Plus Help Graphing two variables, free 8th grade algebra worksheets.
Solving a 2nd order ode, lesson plans for comparing and scaling, base 8 to decimal, how to slove scale factors, solving equations powerpoint, math practice 4th grade free print outs, statistics combination permutation formula.
Ti 89 pdf, Answers To Math Homework, basketball expressions, 10th class math algebra solving software, yr 8 printable worksheets.
Math inequality worksheet, prealgebra help for dummies, binominal fractions help.
Substitution method and fractions, solve my equation, reduce radicals ti-83, ac method algebra calculator, quadratics calculator, linear equation crickets.
Solve nonlinear differential equations, divide polynomial calculator, algebra 2 prentice hall mathematics indiana.
Solution differential equation nonlinear, easy way to solve algebra equations, what is the general trend in the solubilities of the alkaline earth metal ions as you move down the periodic table?, Contemporary Abstract Algebra 7th exercise answers, maths for year six practice for a big exam, rudin chapter 7, algebra trivia questions and answers.
How to solve linear systems ti 92, ti84 emulator, math for kids factoring, substitution in math- adding, subtracting, multiplying rational expressions with square roots, ti 84 mcduougal algebra program.
Online factoring, college algebra graphing prediction, calculator practice for 4th grade.
Free downloadable ebook on cost Accounting in practice, fractonal multiplication algebraic expressions, how to go from decimals to fractions, physics 6th edition glencoe answers, can the ti 86 be used for pre algebra.
Ti-89 mixed fractions, ti-83 plotting graphing hyperbola, log equations calculator, list of maths formulae, least Common denominators of 5 numbers, answers to prentice hall mathematics algebra 1, 4th grade math test india.
Slope quadratic equation, Given the graph of a linear equation: each point on the graph is a solution of the equation and each solution of the equation will be a point on the graph?, solving quadratic equations ti-86, easy way to remember how to square radicals, prime factorization algebra 1 print answers, free math worksheets, percent, interest.
Examples of math investigatory project, algebra with pizzazz creative publications, 8th grade math work sheeets, grade 11 math practise exam, lowest common denominator two quadratic equations, solving expressions.
How do select ten integers that have a mean of 7, median of 9 and a mode of 18,, algebra substitution practice, methods simplify square roots, algebra 1 littell chapter 7 homework, creative ways to teach algebra, ti83online calculator.
Change equations into transformational form on a ti-83 plus calculator, adding sqrt functions, using a regular calculator to find prime numbers.
How to solve college algebra problems for free, fractions for dummies worksheets, quadratic equation college algebra skills.
Find difference quotient algebra, solving special systems, what is domain and range of quadratics.
Simplify (2y^(1/5))4, percentage fomulas in maths, common denominator solver, liner graph with equations.
Rearanging logrithmic equations, solving simultaneous equations by substitution method powerpoint presentations, How do you simplify fractions with square roots?, teach me algebra free.
6th maths integer multiply, algebra 2, equation in vertex form, factoring quadratic expression calculator, learning fractions, ratio, and percent printable free worksheets, college algebra for dummies.
Ti-89 difference quotient, java graphs algebra, holt free logarithim problems, definition of hyperbolas, finding a variable as an exponent algebra, common denominator with variables zorro, math homework help multi-step equation.
Notes Algebra 2 permutations and combinations, cheating in algebra, printable 2 step equation 7th grade, printable worksheets practice finding slope from a graph, how to do cube root on calculator, scale factor worksheet.
Multiplying 3 fractions calculator, variable exponents, logarithm solver.
Ti-89 quadratic equation, pizzazz worksheet answers, graphing linear functions powerpoint, calculator that solve radical expressions, working out graph equations, radical expressions solver, "prentice hall mathematics algebra 2 help".
Aptitude test paper with solutions of different companies, factoring tree calculator, 5th grade worksheet on compatible numbers.
Manipulating equations worksheet, simplifying square root variable fractions, simplifying trinomial tips, worded long division problems, Advanced Algebra Scott Foresman and COmpany answers.
Graph a circle on ti 84, how to plot linear equalities, Factorizing Quadratic Equations online.
Math add sums ks3, free algebra rational expressions calculator, Algebra 2 Cheat Sheets, solving quadratic equations ti 84.
More example riddle of linear equation with 2 variables, trigonometry trivias, KS4 parabolas.
Standard 9th grade algebra problems, University of chicago algebra textbook answers, square root properties, Math worksheets for 2nd grade expanded notation.
Math power grade 8 ratio free test, tips to calculate the mathematical aptitude, balancing equations solver.
Simplifying squared and cubed algebra, Free Online Intermediate Algebra Tutor, find the missing denominator LCD, free completing the square worksheet, math worksheets on order of operation.
Write a quadratic equation in the variable x having the given numbers as solutions. solutions is 9, divide decimals worksheet, Monomial+definition+Ks3, homework help how to calculate factors, free online worksheets and solving equations with fractional coefficients.
Cpm assessment handbook math 1 answers, system of equations ti 89, standard form calculator, math for dummies website, program to Find all the numbers a number is divisible by, how do you factor cubed polynomials?, difference of squares calculator.
Why do we need to rationalize the denominator, how do you x cubed on a calculator, Language of Algebra free worksheets, free division worksheets for6th grade, algebra aptitude test sample questions, free printable symmetry worksheet, free practice grade 11 physics exam.
Introduction to permutation and combination pdf file, ti-84 factoring, proper fractions add subtract test, free prealgebra worksheets, diamond Algebra Problems, converting vertex form to standard form tutorial, beginning and intermediate algebra 4th edition online.
Saxon answers online, reduce expression to lowest terms calculator, houstin mifflin mathimatics book pages to do, algebra 1 problems and solutions.
Expanding simplified radicals, online activities with order of operations in math, Simplifying Algebraic Expressions calculator, i put in my math problem software solves it, free algebra games, formulas for fractions least to greatest.
Free worksheets at entry level 1 Maths, solving equations with fractions calculator, ti-89 Gini coefficient.
How do u solve a simultaneous equation on the TI 84 plus, formulae for algebraic equations, using Matlab to solve a simple differential equation, would you like to play again java, maths rotation worksheet, polynomial and factoring with power and division.
Learnig algebra, different of two squares , how do you do radical expressions, algegra 1 florida, linear equations printable worksheets, log with ti-83.
Finding log base on calculator, how to find out the nth term, math cheat sheet grade 10, simplifying roots pre algebra variables, Hyperbola made easy.
Decimal to square root, online maths test with results end of KS3, Printable Saxon Math Worksheets, free maths game on area, i want the questions to intermediate algebra 4th ed., by K. Elayn Martin-Gay, step by step indefinite integrals calculator.
Solved high school teachers mathematics entrance exam model papers, gmat planner worksheet, writing linear equations from a graph ppt, multiplying radical expressions, factoring program TI-84.
Polynomial factor calculator, prentice hall pre-algebra textbooks, can you simplify expressions on a calculator, common factors multiple worksheet, +"commutative and distributive" +properties math +children +games, writing matlab programs to solve the quadratic formula, simplify fractions with square root.
Simplifying linear equations with multiple variables, kumon G math free sheets, scale factor ppt geometry, algebra questions 6th standard, free algebra step by step solver, convert mixed fraction as a decimal.
Solve quadratic simultaneous equations, Algebra linear equations worksheets with answer key, simplifying exponents worksheets.
Cubed root for ti 83 plus, program to factor equations, scale factor 6 grade math how to teach, advantage of radical to expontential, inequalities worksheet , first grade, solutions to rudin real and complex analysis, mcdougal littell algebra 1 workbook answers.
Least common multiple algebra calculator, free word problem solver online, Algebra dividing powers of x, free worksheet order of operation 5grade, common decimals to change to square root fraction, geometric mean worksheets.
Programas para ti-83 plus, free software to solve math problem, college algebra programs for my casio calc, download pdf examen de aptitud, scientific calculator turn into fraction.
Prove square root of two times square root of eight equals four, "dividing monomials worksheet", free intermediate algebra eTextbook, java linear equations, algebra simplify steps.
Figuring out scale for 7th grade math, "real analysis with real applications" 2009, dummit abstract algebra solution, simplifying square root fractions.
Free math games online linear equations, using calculatorc for diferential equations, free online math tests for 11+.
Cubic of a quadratic equation. quadratic equation, sites de download de apps ti-84 plus radical simplify, can you add the square root of a number, how to cheat on algebra, fourth grade fraction worksheets, parabola graphing calculator, ti-84 +physical science.
Worksheets on adding subtrating numbers, great mathecians, algebra expression tiles, solve algebra problems free, lenear programming.
Hard maths for kids, multiplication and division of rational expressions answers, simplifying algebraic fractions online cheat, free powerpoint,basic addition and subtraction.
Subtraction and addition of fractions with negative sign, non homogenous second order differential equation solver, explains math textbook homework problems with step- by-step math answers, dividing integers games, algebra with pizzazz worksheets, sample graphing equations ti 83 log function.
Free calculator to solve fractional equations, adding, subtracting, multiplying exponents, printable third grade math word problems, how to solve partial differential equations in maple, pre-algerbra with pizzazz! book aa, free online differential equation solver, free printable ged books.
Answers to algebraic formulas, ti-83+ Slope formula program, coupled differential equations in matlab, fluid mechanics 6th solutions manual, binomial expansion, ellipse problems.
How to fractions texas instruments, elipse equation, prentice hall algebra 1 book answers.
Explain integral components in algebra, college algebra factor polynomials, online quadratic calculator, "area" "surface" "grade 9" "practice problems" "answers", lesson plan for multiply integers, games of algebra online for free.
Numbers with a factor of 3, advance algebra 2 midterm study sheet, what are the main uses of 2nd order differential equation.
Free ebook college algebra for dummies, nonlinear differential equation solution, scale factor questions, Factoring Trinomials---use FOIL and Trial and Error calculator, examples of combination in real life, algebra tutor, Graphing Absolute Value Equations ppt.
Cube root ti83 plus, sample word problems and solutions about bearing in trigonometry, Simplifying calculator, ged math lessons.
Monomial gcf tool, introductory algebra 8th edition homework online, mcdougal littell chapter 5 test answers, How to graph polynominal equations, free prentice hall mathematics algebra 1 answers, graphing equations on powerpoint, partial sums method.
Writing exponential expressions in their simplest form, free study guide to cost accounting fundamentals, help with math comparison problems, aleks cheats.
How to convert constant value into time + java, square root simplest radical calculator, 8yh grade pre-algebra help free, third order quadratic how to solve, slope program on graphing calculator, practice questions and answers completing the square math.
KUMON EXERCISE, what are rules for integers in polynomial equations, factoring solver.
Formula of Optional Math, java is divisible by, Basketball worksheets for kids, solve algebraic functions matlab, what is the formula to convert fractions into decimals, online 8th maths worksheets, maths games square and cube numbers.
Highest common multiple of 110, how to solve linear equations with the T1 84, how to enter LCM in TI-83 calculator, inverse log on ti 89, simplifying complex root.
Ti-83 plus algebra solver, rearranging formulas, First Grade Homework Worksheets, free english aptitude questions, convert decimals to mixed number, list of math trivia, yr 8 maths worksheets.
Holt mathematics fraction examples, fractions lesson plans first grade, activities for 5th graders nets for cubes, answers for algebra for college students book.
9th grade algebra free worksheets, how to find stretch factor of an equation, algebraic factoring program, how do i do cubic square roots in a TI-30X IIS, math tests on grade 9 exponents, online calculator for figuring statistics factorial.
"grade 9" "math problems and answers" "surface" "area", grade 9 sample test math, hard equations, texas holt algebra 1 "Holt, Rinehart and Winston" scientific calculator, Solving Algebraic Expressions, log change of base algebraic examples, glencoe algebra 2 workbooks.
Unit 4 multiply and divide fractions lesson 48, ti-83 online calculator program, radical simplification chart, online solving Simultaneous equations, worded problem in +quadratic equation with solution, math practise work sheets for grade 7 area and volume.
Scott foresman math Algebra ebook Download, factoring with a TI-83 Plus, factoring expressions worksheet.
Free step by step algebra solver, remediation lesson plans algebra, worksheet on scale factor, Past KS3 Sats Exam Papers, taking the cubed root of a variable fraction, algebra with pizzazz answer, formula in getting the greatest common factor of 2 numbers.
C CAT, sample papers, Grade 4, Ontario, ti-89 graphing curves, vertex form with stretch factor, how to simplify square roots expression, ti83+ tutor partial derivatives, problem about ellipse.
Algebra fractional equations variables, algebraic converter, decimals , calculator, estimation, boolean algebra simplify calculator, adding and subtracting log.
Math 8th grade pre algebra 2, picture of algebra 1, binomial expansion factoring algebra, java would you like to play again loop, solver quadratic equation, java, graph, Fraction Formula Chart.
Multi-step algebra equations worksheets, algebra with pizzazz pg 169 answers, writing linear equations project, coupled nonlinear differential equations solving using MATLAB, introductory algebra tutoring, free online accountancy book.
Adding square roots with exponents, linear "inequalities calculator" download, cube root key on calculator, ti 84 graphing calculator emulator, homework answers in statistics book.
Printable english 2007 gcse, quadratic equation college algebra evaluation, trivias about fundamental forces.
Higher school games for topic permutation and combination, science 9 practice online exam, solving expressions and exponents solver, critical ordered pair logarithmic function.
College trigonometry tutor software, free download of engg. aptitude test questions, beginner algebra worksheet.
Basic function equations worksheet, algebra 1a 7th grade help worksheets, nth term sample questions, printable ged practice worksheets, simplify basic equations with fractions worksheet, algebra 2 solvers, math problems+sixth grade+basketball.
Lesson 4.2 multiply and divide fractions, Precalculus Online Problem Solver, least common multiple game.
Answers to key to algebra, online balancing equations test for grade 10, ti-84 emulator.
Algebra errors, three step sequencing: free printout, calculate gcd, adding positive and negative fractions worksheet, free math sheets graphs.
Mixing solutions algebra, chemistry linear equation systems, free pre-algebra USA TURTORS, What are situation and solution equations 5th grade, explain simple steps to square root and cubic.
College algebra quadratic equation discussion, implicit differentiation calculator, how to change and answer from a decimal to a fraction on an ti 83.
Solving quadratic equations with the ti 89, I don't understand grade 9 math!, show work for addition and subrtraction of fractions.
Algebra 1 taks worksheet, 4th grade Geometry worksheets, common demonator worksheets problems.
Evaluating exponential expressions, scale factor calculator, clep college mathematics for dummies, teach me algebra.
Why use the square root method, permutation, combination and binomial ppt, how conver a decimal into a square root, ti 89 converting number to square root, pearson prentice hall algebra 2 textbook answer key, solve equation for specified variable, ALGEBRA 1 ANSWERS.
How to cube root on ti 89, how to solve difference quotient equations, example solutions to second order differential non homogenous solutions, grade six math quiz ontario.
Graphing an equation involving absolute value in the plane, solve formula for specified variable calculator, how to find square root of exponents, 5th grade free sample worksheets on ratios.
System of two linear equations solving matlab, free sats paper for math year 7 ks3, ti-89 trigonometry downloads, holt algebra 1.
Using t83 for slope activity for middle school students, rational expressions online calculator, simplify degrees with a variable, Online Ti-83 Calculator.
Fractions with negative exponents, radical simplifying calculator, solving an equation with rational exponents by factoring, mathmatic review of lcm (least common multiple) gcf (greatest common factor), GRAPHING LINEAR EQUATIONS powerpoint.
Online simplify radicals calculator, 5th grade math practice worksheets percentages, how to rewrite division as multiplication.
Worksheet gears, holt physics answers, examples of grade 10 math free, integer worksheets-adding/subtracting.
Convert polar to exponential, negative integer worksheet, use linear function to solve for two known x's, reducing a polynomial calculator, hyperbola problems.
Use graph determine roots of equations, grade nine slope examples, long division with radicals, rational expression calculator, ALGEBRATOR, solving equations with like terms, integers online game interactive.
Factor quadratics calculator, solve online equations show steps, simplifying addition and subtraction radicals, hardest math problems in the world.
Help with gr 9 math, worded problem in +qudratic equation with solution, A symmetric line with two vertices,, grade 11 physics formula and past exams.
T183 plus calculator emulator, prentice hall's physics teacher answer, perfect squares worksheet and radicals.
Systems of linear equations with matrices TI 83, free online algebra solver, math homework help college algebra.
Algebra software coparison math, multiplying powers and factors, prentice hall algebra 1 1998 california edition, free online chemical equations solver, the value of TI in numbers, ode45 second order ode, nys math tests sixth grade.
Fractional exponents grade 11, monomial+term+coefficient+definition+ks3, help with math scale factors, answer algebra questions, simultaneous equations ppt.
Online Equation Solver, solve ti-89 symbolic, Worksheets " "Linear Relations" Homework, gcse questions of complete the square, how to download algebrator to your calculator, algebra combining like terms calculator fractions, reducing radicals with exponents.
Systems by elimination calculator, 2 step algebra practice, ti 84 download, free ti 84 emulator.
Taks math powerpoints, SIMPLIFYING EXPRESSIONS CALCULATOR, free geometry worksheets for middle level, factoring a quadratic calculator, simplify roots calculator, Radical Form, how do write a mixed number to a decimal.
Multiplying cube roots, Sample paper of eigth class, trigonometry trivia with answers, TEKS Holt Algebra 1, Contemporary Abstract Algebra solution pdf, BEGINER ALGEBRA WORKSHEET, solve variable with fractional exponent.
Free Notes & Lesson on Introduction to Functions Intermediate Algebra, FACTORISING IN COMMON ENTRANCE, free online TI 87 calculator, free clerkship aptitude tutorial download, explain the difference between two dimensioanal shape and a three dimensional, math calculator + dividing rational expressions, 9th grade geometry textbook Glencoe Mathematics.
Given data write the equation of the quadratic equation, equation for identify the domain, worksheets solving fraction equations, inequalities with addition and subtraction 8+? >26 four second grade, sample of math trivia question, ti-voyage Gini coefficient, math worksheets for juniors.
Permutations and combinations 6th grade, reduce rational expression, simplifying a square root on top of a fraction, Differential Aptitude Practice Test for 6th grade..
Factor with a cubed number, distributive property review algebra logarithms, mathamatics diffrent types of questions.
Java solve polynomial equation, sample papers of class VIIIth of KV, free online tool for solving algebra problems, convert whole number to decimal, how to factor an algebra equation, GRE prep book permutations combinations probability.
Square of decimal numbers, fractions from least to greatest, trig answer, finding volume on ti-84, all math formula in one sheet, need some least common multiples worksheets, free maths homework sheet.
Ti-30x changing decimal to fraction, free PPT princple accounting, multiplying simple integer worksheets, college trivia worksheet, 4th grade fraction test, Free answers to algebra 1 prentice hall mathematics, exsample of math trivia.
"abstract algebra" algebra online problems and solutions, solve linear equations by graphing worksheets, how to solve square roots with variables, Math Trivia and Facts.
Algebra with pizzazz answers worksheets, put x and y on graphing calculator, 6th grade algebra quiz equations.
All Math Trivia, free mathmatic review of lcm (least common multiple) gcf (greatest common factor), maths standard form poetry, math yr 8.
How to solve 4th order polynomial equation on matlab, math grade 10 algebra, negative slope tI84, sixth grade online McGraw Hill math books, square roots of exponents.
Program that solves the difference quotient, subtract trinomials, You lost a factor of two inside your radical., Online Solver Algebra, mixed completing the square.
Online graphing trig calculator, solve key on graphing calculator, solving quadratic equations using matrices, free ebook the americans mcdougal littell, graph solver, free algebra worksheets, value of expressions with exponents.
Matlab 2nd order ode, GLENCO STUDENT ONLINE SAMPLE PAGES, expression calculator simplifying powers of i calculator, difference of square help.
Three Value Least Common Multiple Calculator, free doc mcqs of biology for ninth class, polynomial equation solver class java.
Mcdougal littel worksheet, multiply and divide rational expressions, formula maths questions, ppt coordinate plane, proportion problems printable, sample problems and solutions on bearing trigonometry, graphing and equalities.
Differential equation solving homogenious, Free Intermediate Algebra Problem Solver, all of grade 10 keywords, radical practice worksheets, square root of an exponent, simplifying variables to the negative fraction exponents, phoenix calculator game.
Algebra equations factoring, how to do algebraic equations, fractions division with integers, algebra 2 holt "Practice Workbook" answers, suare root calculator.
Quadratic formula games, 8th grade algebra 1 midterm study guide, binomial expansion solver, when was graph paper invented?, t 83 calculator download, free worksheets math quadratic formula, teacher plane book free download.
Differential equation editor in simulink, saxon math algebra answers, factor quadratics examples, how to sovle a third order polynomial, algerbra, sample questions, grade 9 math.
Decimal to mixed numbers, automatic pre-algebra solver, time ratio formulas, 10 examples of addition and subtraction of similar fractions.
Free Sats Papers, Saxon Math Algebra 1 ANSWERS 9TH, parabola standard form conversion.
Solving non constant differential equations, roots and radicals, cheat sheet to algebra 1 chapter 6 worksheet, factoring equations fractions, how to enter in exponent outside square root in calculator.
Linest equation, liner equations, table of values in mathamatics.
Math help.com/be able to ask a problem and get the answer for free, how to calculate angles for GED test, mathematical definitions algebraic expression, balancing equations calculator online.
How to do scale factor, simplifying expressions with exponents, solved aptitude questions, online algerbra, holt pre-algebra online workbook, make algebra worksheets three linear equations.
KS2 Measurement word problems worksheet, dividing two fractions and dividing two rational expressions, math geometry trivia with answers, Permutation and Combination ppt pdf.
Texas ti-89 Fraction, yr 4 equation worksheets multiplication, roots exponents, free online ti 83 calculator graph, GRADE NINE MATH- PRACTICE SHEETS.
Algebra 2 online textbook, Printable Math Problems 1st Grade, graphs of equations involving absolute value.
How to solve a limit with trigonometry functions, polynomial sequencepdf, polynomial simplified radical form, answers to math homework, factor three different variables, exponent of a square root, example of problem solving involving the addition of binomial.
SAXON ALGEBRA 1 BOOK ANSWERS BOOK, explanation of trigonometric functions for algebra students, math quadratic poems poems, free linear equation worksheet, cheat sheet for adding negative numbers.
Dividing two variable polynomials, maths square root find, multiply 3 x 3 matrix applet, simultaneous equations on the TI 84, complex rational functions.
How do you solve adding and subtracting mixed numbers?, use graph to find domain inequality, www.mathsquareroots.com, pre algebra answers alan s. tussy, subtract rational expressions, square roots - # on top of the radical, free worksheets on multi step equations.
How to turn fractions in to deciamals decimal numbers, cost accounting free book, find the lcd of a fraction calculator.
Grade nine math questions online, Math Trivia with Answers, maths formulas to print, adding subtracting multiplying dividing fractions, foil solver, answeres for algerbra 2.
Math worksheets for order of operations grade 6, how to solve a nonlinear homogeneous 2nd order ODE, ti-89 calculator solve quadratic equation, rational fraction calculator with variables, program to multiply 3*3 matrix in java language.
Scott foresman biologychapter 7 section 3, factoring online, how to use the graphic calulator entering system of equations, special product and factoring, solving for a specified variable.
Linear algebra done right solution, beginners exponents chart, deviding and muliplying integers worksheets, Guess Question Paper of VIII class, trasnformation worksheets 8th grade, grade 9 math + Ontario + equations and inequations.
Power point presentation using two color counters in teaching math, common divisor of 24, solving simultaneous equations with quadratics ppt., explaining the concept of a limit to a ninth grader, College Algebra 8th edition McGraw Hill chapter one notes.
Roots and radicals with variables, exponent word problems number of combinations, ti-84 display regeq, simplifying algebraic expressions cube roots, printable conversion table for 6th grade algebra, TYPE IN factoring online, printable math review worksheets for translations.
Ellipse graphing calculator, nonlinear system of equations +maple, radical multiplication solver, beginner's guide to algebra, solve matrix software student, how do you square something on a calculator.
How to solve a simple second order differential equation, binomial cubed, standard factoring algebra, simplify the square root of 1/3?, solution third order equation.
General Aptitude questions and English grammer Download free with solutions, math problem help scaling, factoring rules "grade 9" sample problems, how to solve double variable algebraic equations.
Games on division of rational expression, write a rule find a common difference function powerpoint, holt physics solution manual online, prentice hall algebra 1 problems, Third Grade Math Worksheets.
How do you multiply 3 long hand, boolean algebra on ti-89, Physics Worksheet Printable, boolean algebra simplification program.
How to solve a radical equation on TI-83 Plus calculator, variable fractional exponents, salinas ca pre algebra book, free eight grade online angles tutoring demo, mcdougal littell books online, 8th grade algebra 1 prentice hall.
Scale factor problems, simplifying complex algebraic equations, free key code to holt chemistry book, solutions to dummit and foote chapter 14.
Factoring a trinomial with a cube, essentials of elementary statistics ti 84, solving algegraic trinomials, simplifying hard radicals.
Simplifying cube root expressions, calculator that can factor?, how to enter equation with unknown variables on a TI-30xs calculator, plane trigonometry problems simplified and integrated, calculator radical and rational exponents.
How do i convert slope to degrees using a TI 86 calculator, online polynomial degree finder, printable math worksheets solving proportions, free online step by step integration solver, fraction worksheets, how to solve an equation with two variables, algebra two steps worksheets.
Math analysis cheat sheet, 8th grade algebra slope intercept form, algebra problem slovers for free, infinitely many solutions for the quadratic equation.
Iowa algebra test, algebra solver for free online, mixed number to a decimal, 3rd grade star test model paper, radicals restrictions, factoring trinomial square calculator, how to pass an algebra exam.
Highest common factor of 17,32, free ks3 papers, free online algebra II homework help, conceptual physics sheets, simplifying radical expressions worksheet.
Balancing Equations Online, simultaneous quadratic equations, simplify radical program ti, free exponent and order of operation worksheet, simplify +poloynomials, subtracting Equations with fractions.
Download the TI-83 Plus calculator, McDougal Littell Geometry answers, ppt completing the square.
Multiple variables in algebra equations, +scott foresman biology chapter 7 section 3 grade 9, 3rd order polynomial, logarithm equation solvers, how to solve a function when given f.
Graphing linear equations worksheet, square root simplify distributive property, algebraic translation worksheets, quadratic formula solver(find the zeros).
Exponent in quadratic equation, square root exponents, 9th grade algebraic equations free worksheets.
Grade nine math exam online review online quizzes, basis math promblems, algebra 1 worksheets for chapter 6.
Where can i go to solve algebra problems, college algebra practice math problems, cube root key on calculatory, free fraction with exponets calculator, algebra math calculator solving substitution.
Difference quotient with fractions, the hardest math test in the world, simplifying radicals online calculator, y intercept finder, intermediate accounting book download, equation factoring calculator, CPM answers.
Broward schools 6th grade mcgrawhills math workbook, prentice hall mathematics+algebra 1+answer key, free math worksheets and permutations, factor out equations.
Iowa algebra aptitude test sample, prentice hall geometry worksheets, www. free calculator elementary and intermediate college algebra.com, third grade math practice sheets, evaluation and simplification of an expression, Free Elementary Algebra Worksheet.
Adding and subtracting integers easily, ti-83 plus cubed, math trivia for algebra, chemistry for dummies free download, free inequality graphing worksheets grade 9th.
Need help solve plotting points, free online english tests for ks3, 9TH TAKS OBJECTIVE GAMES, common square and square roots chart.
Exponent solver, samples of printable pre measured graph paper for algebra, premutation and combination notes, distributive property and square roots, beginning algebra online test quiz, balance equations online.
Free Math Problem answers Online, mastering chemistry cheats, simplifiying expressions solver, Basic Probability Math Formula, calculate log on calculator, beginners algebra help.
Ti 84 plus fraction, glencoe algebra 2 worksheet answers, differential equation first order practice, algebra 1 worksheets printable, examples of fractional linear equations in algebra.
Solving complex rational expressions, SIXTH GRADE MATH LESSON PLANS GCF, how to solve complex rational expressions, cubed polynomial in algebra, third grade fraction worksheet free, accounting skills test download.
Simplifying cube roots worksheet, how do you solve for a variable squared, how to simplify square numbers, free online tutoral help in math 7th grade, multiplying standard form, how to convert the vertex of a equation.
Graph of 2x^3, gcse algebra worksheet, TI-84 plus mixed number, basic ratio formulas.
Simultaneous equations powerpoint presentations, solve college algebra problems, Linear Algebra with applications fourth edition otto, www.numbes math.com, different ways to learn synthetic division, online free math tutor the elimination method, grade nine math work.
Qudratic function, printable yr 10 trigonometry worksheets, factorize complex equations, factor difference of squares calculator, Algebra Structure and Method Book 1 online download.
Common multiples chart, glencoe algebra 1 answers, hard algebra questions with answers.
Online greatest common factor finder, prentice hall mathematics algebra 1 answer key, programs for algebra II, exponent that is a fraction on ti 83, pre-alg equations with negative and positive numbers for 8th grade, math worksheets with fractional coefficients.
Dividing fractions for 6th graders, conceptual physics 10th edition answers, vertex of quadratic function word problems, solving homogeneous differential equation, 9th grade pre algebra, how to multiply radicals by integers.
Division calculator that shows work free online, free pre algebra worksheets printable, chart on how to add, multiply,subtract and divide integers, expression quadratic equation graph, grade 7 math integers worksheets, nonhomogeneous boundary condition partial differential equations.
How to solve a system of three exuations on a calculator quadratic, math geometry trivia with answer, factorise generator, graphing algebra calculator online, free algebra equations online for grade 8, subtraction practice checking by adding worksheets.
Solving complex equations on matlab, factoring binomial calculator, percents 5th grade worksheet, integral solver intmath pay, poems for balancing equations poems 4th grade, online equation finder.
Holt algebra 1 book, FOERSTER ALGEBRA 1 TESTS, algrebraic equation practice problems.
Slope on a ti-83, algebra linear graph paper, how to make decimals into mixed numbers, answers to problems in introductory and intermediate algebra third edition, simplifying cubed, solving combination problems 5th grade, math worksheets on direct and indirect proportions.
Solve long division of polynomials online, basic algebra for kids free worksheets problems, help with grade 10 algebra, can i get a preview of mckeague's chapters for intermediate algebra 8th edition?, solving a 3 variable equation with the ti83, printable math evaluation test for slow learners in grade 1, sample basic accounting worksheet.
Common bionomial factors, ti-86 simultaneous equations, brain teasers for middle school functions worksheet, negavtive numbers worksheet, complex numbers in simult eqn solver, virginia algebra 2 book, my algebra solver.
How do you solve fraction equations, steps on how to do algebra, graphing hyperbolas ellipses circles, free factoring programs for ti 83, solve my radical equation, free 4th grade poetry worksheets.
Formula from table slope intercept, step by step balancing chemical equations, grade nine math, balanced equation calculator, grade 10 math exam cheat sheet, numbers with multiples in common with 66.
How to factor out GCF on ti 89, linear equations with rational exponents, reverse foil method calculator, solve linear equation matlab, 10th grade algebra 2 ratios, solving simultaneous equations in additional mathemathics.
Solving equations with fractions, variables and whole numbers, 10th grade geometry games, algebra calculators, convert fraction to decimal CALCULATOR, online third degree polynomial solver, second order differential equations by substitution.
I type in quadratic and you factor it for free, prentice-hall world history worksheet answers, learning integers for dummies, "quadratic equation graph", simplifying exponential expressions.
Formulas of free concrete lectures, expanding brackets online calculator, practice worksheet for factoring, online cubed root calculator, factoring trinomials with 4 terms calculator.
Simplifying fraction with negative square roots, algebrator long division polynomial, solver ti, system of equations substitution calculator.
Greatest common factor table, high school math equation sheet, printable working sheet for first, simplify radicals calculator.
Online algebra tests for 8th grade, factoring complex quadratics, solving by elimination.
Partial sums method 2nd grade, algebra 1 answers free, how to convert a decimal to mixed number.
Algebraic fraction calculator, software for college algebra, online Calculator with square root.
Solve linear equations using matlab, math poems ( slope form), cubed square root +calculator, exponents with variables and numbers, x2 + 2xy + y2 = (x+y)2 on ti-83 calculator.
Proof that the square root of a+b does not equal the square root of a + the square root of a, solved examples of addition of mixed fractions, ti-89 step by step quadratic equation, convert a mixed number to a decimal, elementary & Intermediate Algebra textbook third edition, conversion table length 3rd grade printables, solving graphs cubed.
Answer Key for the Textbook Conceptual Physics, how to simplify the cube root of 6, common denominater with numbers and variables, beginners algebra video, ti 89 non algebraic variable in expression error, intermediate algebra 10th ed lial ebook.
Trivias about math, prentice hall math workbooks, aptitude question answer book.
Algebra question yr 10, holt, rinehart and winston algebra 1 answers, maths scale factor, Practice Grade Nine Probability Problems, TO FACTOR CUBE NUMBERS, what is my rule for third grade math, answers to high school foundations of science worksheets.
Mathematics square formula, online calculator for equation having 3 variable, factoring binomials worksheet, 2nd order differential equation calculator, teacher supply in san antonio.
Algebra 2 mcdougal littel, factorise any enter quadratic, mathematics for dummies, geometry worksheets for 8th grade, free online answers to saxon algebra 1/2 2e, "slope field" generator, how to write 2 to the 5th power on the ti 83.
Alegebra problems, third root ti89, Combinations and Permutations+identities.
Quadratic equation calculator casio, proportion worksheet problems 9th grade, Prentice Hall Mathematics Texas Algebra 2 Workbook, free online trig calculator, Online calculator, simplifying, trig answers, percent worksheets.
Give free exam for grade 11 university math, Fractions to decimal worksheeet, T1 83 Online Graphing Calculator, prentice hall algebra 2 answer key, a example of second grade math test for georgia standard, solve homogeneous second order differential equation, holt math reviews.
Free download aptitude test question, steps in balancing chemical equations, online algebra calculator for rational equations, worksheet Chapter 2 section 2 a b middle school life science Mcgraw, power point demos for Algebra mixture word problems, grade 3 math worksheet quiz.
Use caculator rewrite fractions and mixed numbers as decimal, creating a java program to read two integers and determine and print whether the integers and the sum of them are divisible by 3, how to factor algebraic equations.
Beginning algebra fifth edition san francisco mcgraw hill, how to solve complex nos in a quadractic equation with higher power, standard form of an equation converter.
Adding radical expression calculator, Free Pre Algebra Test, yr 7 simplification worksheets, Polynomials for beginners, proportion questions maths work sheets, solving simultaneous equations using matlab, TURNING DECIMALS INTO FRACTIONS CONVERTER.
Nth term calculators, graphing calculator to do completing the square, solving complex linear equations fractions, free download banking examination english,aptitude,maths, ti-82 decimal to binary.
Ti-89 + vertex form, maths project work on sqaure pyramid, writing an equation for a line vertex form, quadratic activities +"high school", decimals, distributive property cubed, free homework sheet sheet.
Finding the squareroot online calculator, graphing inequalities worksheet, free algebra solver download, matlab permutation, solving binomial equations, algebra made easy solve for x tests worksheets.
How do you change a mixed number to a decimal, dividing polynomials calculator, square roots ti 89 index, online test papers free yr 7, statistics definitions worksheet, simplify this trinomial, access code for intermediate algebra 4th edition.
Program for drawing graphs for ti89, Fractions least common denominators calculator, factoring cubed, how to solve second order differential equations nonhomogeneous, 2nd order differential equations with matlab.
Free algebra solver radical solver, parabola graph calculator, solving quadratic equations ti-89, 3d coordinates worksheet gcse, solutions abstract algebra hungerford, matlab simultaneous nonlinear equations.
Example of a 3rd order polynomial, algeraic free online calculatorsto solve the lcd, square root calculator with exponents, rational equation calculator.
Grade formula to pass exam, how to enter negative value on a ti 83 plus, solve by substitution calculator, rationalizing square root convertor, finding and solving least common denominators, least to greatest fractions solver.
Second order differential matlab, ti-84 graphing calculator simulator, substitution method in algebra.
Yahoo visitors found us yesterday by typing in these math terms :
Hard 5th grade coordinate and measurement worksheets, Algebra Buster, teaching circle graphs.
Applications of trigonometric and circular functions in real life, solve slope formula, can you get square root of a negative number, define inverse linear relationship.
Convert vertex form to standard form algebraically, college algebra software, mcdougal littell worksheets, free 10th grade geometry practice tests, algebra with pizzazz worksheet page 208, formula for changing numbers into decimals, Adv. Algebra homework answers.
Online quadratic simultaneous equation solver, Slope Intercept Formula, algebra 2 answer keys, + ppt trigonometric identities, online algebra solver, AVERAGE word problems about fractions and decimals, prentice hall mathematics pre algebra workbook online.
Grade 11 math revision help , canada, glencoe advanced mathematical concepts textbook answers teachers edition, algebra definitions, vertex grade 10 math, prealgerbra, simplifing exponential expressions, free high school algebra 2.
Easy fraction problems with explanation for 7th graders, ninth grade math worksheets, Rational Expressions Calculator, free aptitude questions for analyzing.
Exercises rudin, solve math problems "for free", examples of the quadratic formula with square roots, simplify square roots on ti-83 +exact, nth route calculator.
Fractions to decimals calculator, decimals add and subtract worksheets, learning algabra, algebraic equasions, algebra helpers download, Formula For Square Root, radicals in numerator of fraction.
An earthquake's Richter magnitude was originally defined as the base-10 logarithm of the maximum trace amplitude recorded on a Wood-Anderson seismometer 100 km from the source. Since there are very few working Wood-Anderson seismometers around these days, scientists approximate the magnitude using calibration scales based on the distance from the source and the amplitude of the seismic waves:

ML = log10(A) - log10(A0(delta))

where A is the measured amplitude and the distance correction -log10(A0(delta)), indexed by the epicentral distance delta, comes off a table. The moment magnitude is given by

Mw = (2/3) log10(M0) - 10.7

where M0 is the seismic moment in dyne-cm.
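To make the arithmetic concrete, here is a minimal Python sketch of both calculations. This is my own illustration, not part of the original write-up, and the distance-correction entries are rough stand-ins for the published Richter calibration table:

```python
import math

# Rough stand-ins for the published Richter calibration table:
# epicentral distance (km) -> -log10(A0), the correction added to log10(A).
DISTANCE_CORRECTION = {10: 1.5, 50: 2.6, 100: 3.0, 200: 3.5, 500: 4.7}

def local_magnitude(amplitude_mm, distance_km):
    """ML = log10(A) - log10(A0(delta)); the correction comes off a table."""
    return math.log10(amplitude_mm) + DISTANCE_CORRECTION[distance_km]

def moment_magnitude(moment_dyne_cm):
    """Mw = (2/3) * log10(M0) - 10.7, with M0 in dyne-cm."""
    return (2.0 / 3.0) * math.log10(moment_dyne_cm) - 10.7

# A 1 mm trace at 100 km is the classic ML = 3.0 reference case.
print(local_magnitude(1.0, 100))    # 3.0
print(moment_magnitude(1.1e25))     # ~6.0
```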
The energy of an earthquake is proportional to the amplitude squared, so in theory an earthquake of surface wave magnitude n should release the energy of approximately 100 earthquakes of surface wave magnitude n-1. In reality, the energy is only about 30 times greater, and the shaking is only about 10 times greater in intensity. Because of this, seismologists can predict the number of smaller earthquakes likely to occur after a bigger earthquake using the Gutenberg-Richter relation, where N is the number of earthquakes of a given magnitude M:

log10(N) = a - b*M
Most of the time b is assumed to be equal to 1. A b-value greater than 1 in an area generally means that small earthquakes occur frequently; a b-value less than 1 indicates an area that is more prone to a larger earthquake. So b > 1 in volcanic areas where there are many earthquake swarms, and b < 1 along subduction zones and continental rifts, where there are many large earthquakes with few aftershocks.
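As a quick illustration of mine (the a value below is arbitrary; b is the commonly assumed 1), the relation can be evaluated directly:

```python
def expected_count(magnitude, a=5.0, b=1.0):
    """Gutenberg-Richter: log10(N) = a - b*M, so N = 10 ** (a - b*M)."""
    return 10 ** (a - b * magnitude)

# With b = 1, each one-unit drop in magnitude predicts ten times as many events.
for m in (5.0, 4.0, 3.0):
    print(m, expected_count(m))   # 1.0, 10.0, 100.0
```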
An earthquake is defined as a sudden slip event along a fault in the subsurface. The first shock reveals a good deal about the nature of the fault. Depending on whether a particular area is compressed or extended by movement along the fault, the first motion p-wave will be upwards (positive) or downwards (negative), respectively. The p-wave first arrivals can be plotted on a stereonet with two planes, drawn along great circles, separating the compressional (positive) p-waves from the extensional (negative) p-waves. These planes are known as nodal planes. Focal mechanisms present a "double-couple" in which either nodal plane is representative of the fault plane. To determine the correct plane, the locations of aftershocks and/or surface geology can be useful.
Let's assume we have a simple strike-slip fault.
Above is a two-dimensional model of the first-motion radiation pattern for a strike-slip faulting event. A shows two undeformed blocks before faulting. B shows the two blocks being deformed as they are sheared to the right. C shows a sudden loss of cohesion between the two blocks along an E-W striking fault plane, producing an earthquake. The slip squeezed new material into the upper-right and lower-left quadrants, causing compression. Because the volume of the material must remain essentially constant (Poisson's ratio), the material in those two quadrants was uplifted to compensate for the compression; this is expressed as a positive first motion of the p-waves. In the upper-left and lower-right quadrants, material was pulled away by the fault, which is expressed as subsidence of the material and a negative first motion of the p-waves.
On the stereonet, the dark circles are positive first arrivals and the open circles are negative first arrivals. Next, a great circle is drawn that separates the two types of arrivals, and the pole to this circle is plotted. The second great circle must contain that pole and also divide the two types of arrivals. The end result is something like this:
Here the dark regions are compressional. This image is the characteristic first motion pattern for strike-slip faults; the nodal planes lie along the E-W and N-S axes. Because of the "double-couple," this fault is either a vertical, E-W striking, right-lateral fault or a vertical, N-S striking, left-lateral fault.
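To show the quadrant logic in code — a simplified sketch of my own, assuming the standard sin(2 x azimuth) P-wave radiation pattern for a vertical strike-slip source — the following classifies the expected first motion by station azimuth for the E-W right-lateral case:

```python
import math

def first_motion(azimuth_deg):
    """Expected P-wave first motion for a vertical, E-W striking,
    right-lateral strike-slip fault; azimuth is clockwise from north."""
    amplitude = math.sin(math.radians(2.0 * azimuth_deg))
    if amplitude > 0:
        return "up (compression)"
    if amplitude < 0:
        return "down (dilatation)"
    return "nodal (station lies on a nodal plane)"

# NE and SW quadrants are compressional, NW and SE dilatational,
# and stations due N/E/S/W sit on the nodal planes.
for az in (45, 135, 225, 315, 90):
    print(az, first_motion(az))
```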
The maximum and minimum compressive stresses can also be determined from first motion diagrams. The minimum compressive stress axis, also known as the T-axis, bisects the compressional first arrivals, and the maximum compressive stress axis, or P-axis, bisects the extensional first arrivals.
Normal faults and thrust faults also have characteristic patterns.
The pattern on the left is typical of pure normal faulting and the pattern on the right is typical of pure thrust faulting. By studying the geology one can determine that the left-most nodal plane is the fault plane for both the normal and the thrust faults.
For the normal fault pictured (Fig. 18.16), the P-axis can be represented by a dot at the center of a stereonet (a vertical line) and the T-axis at 45 degrees from the fault plane, bisecting the extensional first arrivals. For the thrust fault, the T-axis can be represented by a dot at the center of a stereonet with the P-axis 45 degrees from the fault plane, bisecting the compressional first arrivals. For the left-lateral strike-slip fault, the P-axis is parallel to the fault and the T-axis is perpendicular to it.
It is very unusual to find a pure strike-slip, normal or thrust fault in nature. Most faults consist mainly of one motion with a small component of another. Pure strike-slip faults have first motion diagrams with nodal planes that intersect in the center of the stereonet (as pictured above). Strike-slip faults that have some component of thrust or normal faulting intersect off-center.
This is either a left-lateral or right-lateral strike-slip fault with a component of thrust faulting. The nodal planes for pure normal faults and pure thrust faults intersect along the primitive circle (as pictured previously). Strike-slip faults can have components of either normal or thrust faulting, but normal and thrust faults can only have components of strike-slip faulting. Here are the first motion patterns of an oblique normal fault and an oblique thrust fault with a small component of strike-slip motion.
The first motion pattern for thrust faulting with a small component of left-lateral strike-slip motion would look the same, except the compression and extension regions would be reversed.
Every time value of money problem has five variables: Present value (PV), future value (FV), number of periods (N), interest rate (i), and a payment amount (PMT). In many cases, one of these variables will be equal to zero, so the problem will effectively have only four variables.
The four variables are present value (PV), time stated as the number of periods (n), interest rate (r), and future value (FV).
Also, what are the components of the time value of money? In any time value of money relationship, there are the following components:
- A value today called present value (PV),
- A value at some future date called future value (FV),
- Number of time periods between the PV and FV, referred to as n,
- Annual percentage interest rate labeled as r,
- Number of compounding periods per year, m,
Then, what are the 3 elements of time value of money?
Five Key Elements of Time Value of Money Situations
- (n) Periods. Periods are the total number of time phases within the holding time.
- (i) Rate. The rate is the interest or discount rate, commonly expressed as an annual percentage.
- (PV) Present Value.
- (PMT) Payment.
- (FV) Future Value.
What is time value of money with example?
Time Value of Money Examples. If you invest $100 (the present value) for 1 year at a 5% interest rate (the discount rate), then at the end of the year, you would have $105 (the future value). So, according to this example, $100 today is worth $105 a year from today.
What is the formula for present value?
Present value formula: PV = FV / (1 + r)^n. PV, the present value (also known as present discounted value), is the value on a given date of a payment; r is the periodic rate of return, interest, or inflation rate, also known as the discounting rate; and n is the number of periods.
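As a quick check, here is a minimal Python sketch of that discounting formula, verified against the $100/$105 example above.

```python
def present_value(fv, r, n):
    """Discount a single future payment FV back n periods at rate r."""
    return fv / (1 + r) ** n

# The earlier example: $105 one year out at 5% is worth $100 today.
print(round(present_value(105, 0.05, 1), 2))  # 100.0
```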
What is Present Value example?
Present value is the value right now of some amount of money in the future. For example, if you are promised $110 in one year, the present value is the current value of that $110 today.
Why is present value important?
Present value is the single most important concept in finance. The less certain the future cash flows of a security, the higher the discount rate that should be used to determine the present value of that security. For example, U.S. Treasury bonds are considered to be free of the risk of default.
What are the reasons for time value of money?
There are three basic reasons to support the TVM theory. First, a dollar can be invested and earn interest over time, giving it potential earning power. Second, money is subject to inflation, which eats away at the spending power of the currency over time, making it worth less in the future. Third, receiving money in the future is uncertain: a dollar promised later carries risk that a dollar in hand does not.
What is TVM calculator?
A time value of money (TVM) calculator is a tool that helps you find the present or future value of a particular amount of cash received in the future or owned today.
What is the concept of present value?
Present value (PV) is the current value of a future sum of money or stream of cash flows given a specified rate of return. Future cash flows are discounted at the discount rate, and the higher the discount rate, the lower the present value of the future cash flows.
How do you calculate total interest?
Multiply the total amount you borrow by the interest rate per payment period and by the number of payments you will make. If you borrow $500 at an interest rate of six percent per period for six payments, the calculation displays as 500 x .06 x 6, to arrive at a total interest calculation of $180.00.
What increases present value?
An increase in the discount rate decreases the present value factor and the present value. A decrease in the time period increases the present value factor and increases the present value. This is because if you have less time, you will have to set aside more today to earn a specified amount in the future.
How does time value of money help in decision making?
The concept of time value of money is important to financial decision making for businesses and individuals. It includes the concepts of net present value and future value. We just used discounted cash flow to determine what a future amount of money would be worth today.
What is Rule No 72 in finance?
The Rule of 72 is a quick, useful formula that is popularly used to estimate the number of years required to double the invested money at a given annual rate of return. Alternatively, it can compute the annual rate of compounded return from an investment given how many years it will take to double the investment.
What is a simple interest rate?
Simple interest is a quick and easy method of calculating the interest charge on a loan. Simple interest is determined by multiplying the daily interest rate by the principal by the number of days that elapse between payments.
What is future value and why is it important to calculate?
Future value (FV) is the value of a current asset at a future date based on an assumed rate of growth. The future value (FV) is important to investors and financial planners as they use it to estimate how much an investment made today will be worth in the future.
What is the present value of an annuity?
The present value of an annuity is the current value of future payments from an annuity, given a specified rate of return, or discount rate.
How do you calculate monthly interest rate?
To calculate a monthly interest rate, divide the annual rate by 12 to account for the 12 months in the year. You’ll need to convert from percentage to decimal format to complete these steps. For example, let’s assume you have an APY or APR of 10% per year.
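Tying the last few answers together, here is a minimal Python sketch of the per-period bookkeeping; it assumes, as the total-interest answer above does, that the quoted rate is a rate per payment period.

```python
def monthly_rate(apr_percent):
    """Convert an annual percentage rate to a monthly decimal rate."""
    return apr_percent / 100 / 12

def simple_interest(principal, periodic_rate, periods):
    """Simple interest: principal x rate per period x number of periods."""
    return principal * periodic_rate * periods

print(round(monthly_rate(10), 6))       # 0.008333 per month for a 10% APR
print(simple_interest(500, 0.06, 6))    # 180.0, matching the example above
```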
Note to the Teacher:
In this lesson students will be comparing the quantity of protein consumed by different animals.
The lesson is designed at two levels.
Level 1: Students compare the amount of meat eaten by different animals and write and solve simple word problems.
Level 2: In order to make the comparisons more meaningful, students (with support) calculate the amount of protein consumed by different animals in proportion to their overall size.
Ask students to make an estimate about how many kilograms of meat the average person consumes in a year. They may also estimate in pounds, as this is still the more familiar unit of measure. I then have several students share with the class and I have the other students respond because this is a supported structure in which to have them practice MP3: Construct viable arguments and critique the reasoning of others. I teach this lesson in the 4th quarter so while they are still developing an understanding of weights and measures, they do have an idea about how to make a reasonable versus a preposterous guess.
A typical hamburger is approximately 1/4 of a pound. A reasonable guess would be anywhere from 10 pounds (equivalent of 40 hamburgers in a year, with NO other meat) to hundreds. A thousand or a million is, of course, unreasonable.
As we talk through their guesses, I help students multiply their guess for a day times the approximate number of days in a year (350).
I would remind them that kilograms are more than pounds, very approximately double, so whatever they put down for their guess in kilograms, the amount in pounds should be larger! A good benchmark for pounds/kilograms is an "average" adult who might weigh 150 lbs or 68 kilograms.
This is how I teach this lesson:
"There will be several steps to find the average amount of meat eaten by a narwhal per day. All of the math is based on rough estimates
The first question we need to answer is, what number should we use? 25? 50? Think… (A child will suggest that the number in the middle, or an average, should be chosen. If this is not suggested by a student, ask questions that lead them to that realization).
Go to step one on your Humans vs Narwhals - part 1 study guide. Finding the midpoint between two numbers is the same as finding the average of the two numbers. So, an alternative strategy here would be to add 25 + 50 and divide by 2. Either way, you obtain an answer of 37.5, which can be rounded up to 38 lbs of food per day for a typical narwhal.
Since the narwhal doesn't eat this amount every day, because it does most of its hunting and feeding during 9 months of the year, we have an extra step that I'll take you through. We will need to multiply the 38 lbs x the days in 9 months, then divide that by the approximate number of days in a year to get the daily average. An average looks at the whole year, and the narwhal doesn't eat much during the summer, so we have to bring that information into the equation.
Under step two (a), write the following equation (I've provided kg and lb so you have a choice):
pounds 38 x (9 x 30) = n That will be the total amount the narwhal eats in a year in lbs.
kilograms 17 x (9 x 30) = n That will be the total amount the narwhal eats in a year in kg.
Help the students solve for that. Some of my students can decompose and solve it but it is okay to have them calculate the 9 x 30 and then use a calculator for:
38 (pounds ) x 270 days = 10,260 lbs. or 17 (kilograms) x 270 days = 4,590 kg
Now I want you to think of what equation we could solve to find the number of kilograms of meat eaten per day by a narwhal. If you have an idea, write the equation on the line under step two (b). Don't worry about making a mistake; I just want you to think about this. There is an extra line where we will write the final equation together. (Hint, if needed: you will be using division.)
Total weight of meat eaten in a year divided by 365 = amount eaten in a day!
4590 kg divided by 365 = 12.5 kg/ day (approx 28 lbs or 112 child sized hamburgers)
Now, what we need to keep in mind as we look at these numbers is that a narwhal is much bigger than a human. It most certainly eats more meat than we do, but does it eat more in relation to its size?
To figure this out, we will go through step 3. We will take the weight of the average narwhal, 1200 kg (2646 lbs), and figure out how much meat it eats for each 100 kg of its body. This is not third grade math, so don’t worry about it if you don’t understand it yet. We just need to be able to compare the amount the narwhal eats to the amount a typical person eats. Our typical person is 150 lbs. To keep everything in metric from here on, and to give us an easy number to work with, we'll say our human is 100 kg. That's a bit heavier than average, but nothing out of the ordinary; a lot of the people you see walking around, especially tall people, are about 220 lbs.
So, for step 3, first we will take the narwhal's weight and find out what divisor we need to use to end up with a quotient of 100. (This is all in metric since that's the standard). Write this on your paper:
1200 kg divided by ? = 100 kg.
Yes, 12. 1200 kg divided by 12 = 100 kg.
Now, since we divided the weight of the narwhal by 12 to get our 100 kg that is our equivalent of our human contestant, we'll divide the amount of meat they eat in a day by 12 also. Otherwise the comparison would be out of balance.
So, write this on your paper: 12.5 divided by 12 = ? We know what 12 divided by 12 is. What do you think this will be? Think...
We can go ahead and write 1 kg of food for 100 kg of narwhal. (Actually 1.04 - close enough)!
There you go! We've solved our first problem! A narwhal consumes about 1 kg (2.2 lbs) of food for 100 kg of weight. That would be like a tall man eating about 8 small hamburgers, and nothing else, all day. (The idea I'm hoping to lead them to is that even though some of these animals consume an enormous quantity of meat, per their body size it's actually less than what we eat).
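For teachers who want to check the arithmetic, or extend it to the polar bear and mountain lion examples, here is a minimal Python sketch of the per-100-kg scaling; the numbers are the lesson's rough estimates.

```python
def meat_per_100kg(body_mass_kg, daily_meat_kg):
    """Daily meat intake scaled to each 100 kg of body mass."""
    return daily_meat_kg / (body_mass_kg / 100)

# Numbers from the lesson: a 1200 kg narwhal eating ~12.5 kg per day.
print(round(meat_per_100kg(1200, 12.5), 2))  # ~1.04 kg per 100 kg of narwhal
```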
Note to teacher: Here are a few Polar Bear Notes to help you work through the second example."
Here is an additional page with two more examples that can be worked through with the class. The value in doing all of this together is that when they are given the numbers to complete the calculation on their own, they will have an idea about where the numbers originated. It's important that they don't just compare the quantity of meat eaten per day, because without knowing the size of the animal that data is distorted and misleading. Humans vs Narwhals Part 2.
This was a very complex task and it is okay if students did not fully understand all the steps. They should be able to explain something they understood.
Write the amount of meat consumed by each of the example animals (narwhal, polar bear, mountain lion, and black bear) up on the board. Ask them to make an observation about how much meat the omnivore (black bear) eats compared to the other three animals, which are carnivores. Have them write their idea on the back of their study guide.
Then have students compare how much the average American eats (from the lesson More Meat!) with how much the carnivores eat and how much the omnivore eats. Remind students that our bodies are designed to be omnivorous. |
Percentage calculator makes calculations while you are entering numbers in the cells, and the result is shown immediately. You can copy the result by clicking on the sum total. The calculations made may be saved, deleted, and adjusted in the percentage calculator’s memory. Find percentage: calculate the increase or decrease in percent. Calculating percentages is an interesting part of the world of mathematics and comes up in every math class. The percentage converter helps you with percent increase, decrease, differences, and general percentage calculations.
Use Alcula's percentage calculator to compute percentages and answer questions such as: How much is 7% of 25000? What percentage of 10000 is 120? 250 is 8 percent of what amount? How much is 12000 + 8%? In the calculator window, choose the question you need answered and enter the two quantities that you already know. In the United States alone, if the sigma level were between 3 and 4, there would be 50 newborn babies dropped per day and 5,000 incorrect surgical procedures per week. Not all opportunities and defects are created equal.
#3: Calculating with Percent, e.g., 6 out of 8 is what %? and 15 is 30% of what? Percentage Chart: this percentage chart shows what 15% of $1 through $100 is, although it is customizable so you can set the percentage and the numbers to whatever you want. According to the latest figures from the U.S. Census Bureau, 28,000 Vermonters, 4.6 percent of the state’s population, were uninsured at some time during 2017.
46 is what percent of 90 = 46 / 90 = 0.511111 Converting decimal to a percentage: 0.511111 * 100 = 51.11% How many degrees in 1 percent? The answer is 3.6. We assume you are converting between degree and percent. You can view more details on each measurement unit: degrees or percent The SI derived unit for angle is the radian. 1 radian is equal to 57.295779513082 degrees, or 15.91549430919 percent.
Results: The fraction 4/6 can be expressed as 66.66666666666666 percent, or 66.7% (rounded to 1 decimal place). Percentage calculator online to find the percentage of a number, calculate x as a percent of y, or find a number given a percent. Solution for '4.8 is what percent of 6?': the question is of the type 'P is what percent of W', where W is the whole amount and P is the portion amount; that is, calculating the percentage from a whole knowing the part.
Percent to fraction converter How to convert fraction to percent For example, in order to get a decimal fraction, 3/4 is expanded to 75/100 by multiplying the numerator by 25 and denominator by 25: The seasonally adjusted unemployment rate fell to 4.6 percent in the September 2017 quarter, down from 4.8 percent in the June 2017 quarter.
So, to convert this value to percent, we just multiply it by 100. In this example multiplying 4.6 by 100 we get 460 (the value in percent). There is an ease way to accomplish this: Step1: Move the decimal point two places to the right: 4.6 → 46 → 460. Step2: Add a % sign: 460%; So, 4.6 is equivalent to 460% in percent form. The rate of increase in consumer prices hit a fresh five-year high of 4.6 percent in May and monetary authorities said inflation remains a concern despite signs it is slowing down.
4 / 6 = 0.6667. Then, we multiplied the answer from the first step by one hundred to get the answer as a percentage: 0.6667 * 100 = 66.67%. We can prove that the answer is correct by taking 66.67 percent of 6 to get 4: (6 x 66.67)/100 = 4. Note that our calculator rounds the answers up to two decimals if necessary. Percent means per-hundred. Use that knowledge to solve problems like: what percent of 16 is 4? If an employer dismisses an employee who took his vacation early and received his indemnity, is the employee entitled to 4% or 6% of his wages when he leaves? Yes. The employer must pay him the 4% or 6% of the wages that he earned during the current reference year.
The percentage increase calculator is a useful tool if you need to calculate the increase from one value to another in terms of a percentage of the original amount. Before using this calculator, it may be beneficial for you to understand how to calculate percent increase by using the percent increase formula. Unemployment Rate Drops To 4.6 Percent, Lowest Level Since 2007 : The Two-Way The U.S. added 178,000 new jobs in November, according to the Bureau of Labor Statistics, which was about what was.
6/4 as a percent - solution and the full explanation with calculations. Below you can find the full step by step solution for you problem. We hope it will be very helpful for you and it will help you to understand the solving process. Here we answer What is 4/2 as a percent? and What is 4/2 as a percentage? Converting a fraction such as 4/2 into a percent is pretty easy. All you have to do is divide the numerator by the denominator and then multiply that result with 100 like so:
4.6% of 1100 = 50.60 4.6% of 1200 = 55.20 4.6% of 1300 = 59.80 4.6% of 1400 = 64.40 4.6% of 1500 = 69.00 4.6% of 1600 = 73.60 4.6% of 1700 = 78.20 4.6% of 1800 = 82.80 4.6% of 1900 = 87.40 4.6% of 2000 = 92.00 4.6% of 2100 = 96.60 4.6% of 2200 = 101.20 4.6% of 2300 = 105.80 4.6% of 2400 = 110.40 4.6% of 2500 = 115.00 4.6%... Grade is usually expressed as a percentage, but this is easily converted to the angle α by taking the inverse tangent of the standard mathematical slope, which is rise/run or the grade/100. If one looks at red numbers on the chart specifying grade, one can see the quirkiness of using the grade to specify slope; the numbers go from 0 for flat, to 100% at 45 degrees, to infinity as it. Quite simply, by training at a certain percentage of our 1RM we can target a specific training goal such as maximal strength, explosive strength, and/or speed strength. And as simple as it sounds, the entire concept of percentage-based training is based on the idea that a specific goal or desired outcome dictates a specific training percent.
Knowing how to calculate percentages will help you not only score well on a math test but in the real world as well. Percentages are used for calculating tips in restaurants, finding out the nutritional content of your food, or even determining statistics of your favorite sports team. Concentration solution unit conversion between milligram/mL and percentage: percentage to milligram/mL conversion in batch, with a per-unit conversion chart.
Percentage calculator: percentage increase/decrease calculation. The percentage increase or decrease from an old value (V_old) to a new value (V_new) is equal to the difference between the values divided by the old value, times 100%: percentage increase/decrease = (V_new − V_old) / V_old × 100%. Example #1: the average price of a movie ticket rose to $9.37 in the fourth quarter of the year, as domestic box office revenue ended on a down note in 2019 at $11.4 billion, falling 4 percent year-over-year.
This free percentage calculator computes a number of values involving percentages, including the percentage difference between two given values. Explore various other math calculators, as well as hundreds of calculators addressing finance, health, fitness, and more.
Converting a fraction such as 4/6 into a percent is pretty easy. All you have to do is divide the numerator by the denominator and then multiply that result by 100, like so: (Numerator/Denominator)*100. When you enter 4/6 into the above formula, you get (4/6)*100, which calculates to 66.66666667%. We wish to express the number 4.6 as a percentage, so we multiply it by 100. In this case, multiplying 4.6 by 100 we get 460 (the value in percent form). The easy way: Step 1: Shift the decimal point two places to the right: 4.6 → 46 → 460. Step 2: Add a % sign: 460%. How much is 6.4 percent of a number? Find a percentage of a number or calculate a percentage based on two numbers. How to find 6.4% of a number? Take the number, multiply it by 6.4, and then multiply that by .01.
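Here is a minimal Python sketch of the two conversions this page keeps using, fraction-to-percent and percent-of-a-number; the sample inputs are arbitrary.

```python
def fraction_to_percent(numerator, denominator):
    """(numerator / denominator) * 100, e.g. 4/6 -> 66.67%."""
    return numerator / denominator * 100

def percent_of(percent, amount):
    """percent% of amount, e.g. 6.4% of 250."""
    return amount * percent * 0.01

print(round(fraction_to_percent(4, 6), 2))  # 66.67
print(percent_of(6.4, 250))                 # 16.0
```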
4.6% Percent Calculator: use this calculator to find percentages; just type in any box and the result will be calculated automatically. The percent value can also be found by multiplying first: in this example the 50 would be multiplied by 100 to give 5,000, and this result would be divided by 1250 to give 4%. To calculate a percentage of a percentage, convert both percentages to fractions of 100, or to decimals, and multiply them. For example, 50% of 40% is 0.5 × 0.4 = 0.2 = 20%. Gross domestic product expanded 6.4 percent in the fourth quarter compared to the revised 6 percent in the third quarter and the 6.4 percent median forecast of 22 economists in a Bloomberg poll. This brought average GDP growth for the full year to 5.9 percent, slightly below the government's 6 to 7 percent goal.
Any percent can be changed to a fraction by dividing the percent by 100; therefore, 625% = 6.25. 25/100 can be reduced to 1/4, and there are four 1/4 per unit; therefore, 6-1/4 = (4 × 6 + 1)/4. 6 percent of 4 is the same as 6 per hundred of 4. We can therefore make the following equation: 6/100 = X/4. To solve the equation above for X, you first switch the sides to get the X on the left side, then you multiply each side by 4, and then finally divide the numerator by the denominator on the right side to get the answer. To convert a percentage into a 4.0 grade point average, start by dividing the percentage by 20, then subtract 1 from that number to get the grade point average. For example, if your grade is 89 percent, you would start by dividing 89 by 20, which would give you 4.45.
A percentage change is measured in relation to the previous value (10% in our example; one percent of that is one hundredth of 10%, i.e., 0.1%), while a change in percentage points is measured in relation to the whole (the whole is the entire population, or 1000 in our example; 1% of that is 10). 6/4 as a percentage: how to convert the fraction 6/4 to a percentage value. Please input values in this format: a b/c or b/c. Examples: four tenths should be typed as 4/10; one and three-halves should be typed as 1 3/2. For mixed numbers, please leave a space between the integer and the fraction.
6% Percent Calculator: use this calculator to find percentages; just type in any box and the result will be calculated automatically. The 2016 Fortune 500 list, released on Monday, includes just 21 companies with women at the helm, down from 24 last year.
About Ratio to Percentage Calculator . The Ratio to Percentage Calculator is used to convert ratio to percentage. Please note that in this calculator ratio a:b means a out of b. Example. Example: Convert the ratio 2:4 into a percentage: 2 : 4 can be written as 2 / 4 = 0.5; Multiplied 0.5 by 100, 0.5 × 100 = 50, so the percentage of ratio 2 : 4. 4. Write the percentages into the sectors in the circle graphs Think of fractions! 5. The circle graph at the right gives the angle measure of each sector of the circle. Find what percentage each sector is of the whole circle, and write the percentage in the sector. Remember, the whole circle is 360°. |
Angular Acceleration And Moment Of Inertia Report Examples
The purpose of this experiment is to elaborate how the angular acceleration of an object is measured. The report also explains how the angular acceleration of an object can be made constant. In addition, the investigation aims to establish if there is an association between the moment of inertia and probability distributions.
The radii of the stepped pulley and the plastic disks are measured using vernier calipers. The hanging mass is lowered and the apparatus is observed to ascertain that the disk is accelerating and that friction is minimal. Logger Pro is used to measure the linear acceleration for five different falling masses. The average acceleration and the standard deviation of the accelerations are also computed.
Moment of inertia
The pendulum's period is measured for small oscillations about the drilled pivot hole. Three trials of five oscillations each are performed.
The width, length, height, and mass of the block are measured. The width and length are used as the limits in the integral which defines the moment of inertia.
The derivation is used to compute the numerical value which is compared to the experimental value and the percent difference computed.
Experimental acceleration (m/s²):
Average acceleration = 0.043 m/s²
Standard deviation of acceleration = 0.018382383 m/s²
Moment of Inertia.
Experimental moment of inertia = 0.0035 kg·m²
Theoretical moment of inertia = 0.00276 kg·m²
Percentage difference = 21.1%
Newton's second law in rotational motion takes the form Torque = Iα.
When the angular acceleration is constant, the angular speed obeys a kinematic equation identical to that of straight-line motion: ω = ω0 + αt, where ω0 is the initial angular velocity, which is zero if the disk starts its motion at rest.
The assumption behind this experiment is that the tension is due only to the hanging mass. The analysis therefore assumes a frictionless system, with the vertical pulley treated as massless. The linear acceleration can then be taken as the product of the angular acceleration and the radius of the pulley.
T – mg = -ma
For a massless vertical pulley, a = αrs
Therefore, Torque = rs·T = Iα, where T is the tension, rs is the radius of the step pulley, and I is the moment of inertia.
Substituting for the tension
Torque = rs (mg – mαrs) = Iα
Rearranging the formula,
a = mgrs² / (I + mrs²)
The angular acceleration can then be obtained from the formula,
α = a/ rs
The experimental values of linear acceleration are compared to the theoretical values computed using the formula a = mgrs² / (I + mrs²) for the different hanging masses considered in the investigation.
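A minimal Python sketch of that prediction; the hanging mass and step-pulley radius are hypothetical values chosen for illustration (only the moment of inertia comes from the report).

```python
def predicted_acceleration(m, r_s, I, g=9.81):
    """Theoretical linear acceleration a = m*g*r_s**2 / (I + m*r_s**2)
    for a mass m hanging from a step pulley of radius r_s driving a
    disk of moment of inertia I (frictionless, massless-pulley model)."""
    return m * g * r_s**2 / (I + m * r_s**2)

# Hypothetical values: 50 g hanging mass, 2 cm step-pulley radius,
# and the report's theoretical moment of inertia.
a = predicted_acceleration(0.050, 0.020, 0.00276)
print(round(a, 4), "m/s^2")            # linear acceleration
print(round(a / 0.020, 3), "rad/s^2")  # angular acceleration alpha = a / r_s
```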
The percentage error in the experimental value can be computed as (0.00546/0.043) × 100 = 12.7%: the average difference between the theoretical and experimental accelerations is divided by the average experimental acceleration to obtain the error.
Comparing the theoretical and experimental angular accelerations would yield the same percentage difference, since both quantities are obtained by dividing the corresponding linear acceleration by the same value, rs.
Moment of inertia.
The rotational inertia of an object is expressed by the second moment of the magnitude of the position vector from the axis, taken with respect to the mass element: I = ∫ r² dm.
In the experiment we are considering a block, and as such the differential element is two-dimensional, since the integration runs over both the length and the width of the block. The differential mass element is therefore replaced with a differential area element.
Thus, dm = σ dA, where dA = dx dy and σ represents the differential mass element divided by the differential area element (the mass per unit area).
The double integral is expanded with x running from 0 to 0.179 and y from 0 to 0.091. Integrating and substituting these values yields 0.002756 kg·m², which is the theoretical value of the moment of inertia.
The experimental value of the moment of inertia is computed using the formula,
T = 2π√(I/(Mgh)). The period of the pendulum is observed, and since the mass of the object is known and the distance from the center of the object to the axis of rotation was measured, the moment of inertia can be obtained by rearranging the formula. The experimental value of the moment of inertia is found to be 0.0035 kg·m².
The percentage error is computed as (0.0035 − 0.00276)/0.0035 = 0.2114 ≈ 21.14%.
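A minimal Python sketch inverting the pendulum formula for I; the period, mass, and pivot distance are hypothetical readings chosen so the result lands near the reported 0.0035 kg·m².

```python
import math

def inertia_from_period(T, M, h, g=9.81):
    """Invert T = 2*pi*sqrt(I/(M*g*h)) for the moment of inertia I of a
    physical pendulum of mass M pivoted a distance h from its center of mass."""
    return T**2 * M * g * h / (4 * math.pi**2)

# Hypothetical readings: a 0.205 kg block with a 0.83 s period,
# pivoted 0.10 m from its center of mass.
I_exp = inertia_from_period(0.83, 0.205, 0.10)
print(round(I_exp, 4), "kg*m^2")  # ~0.0035
```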
The experimental acceleration was measured as the object was dropped. The standard deviation is a measure of how far the individual values of acceleration were dispersed from the average value. The value of standard deviation observed in this experiment is within the expected limits considering that different masses were used and as such the mean value is not the average value of the replication of different measures of the same object. The relationship between angular acceleration and linear acceleration is employed in the study to present a method for measuring angular acceleration.
The moment of inertia of a rotating object can be expressed as a probability distribution. The second part of the experiment uses probability distribution to determine the moment of inertia of a rectangular block. The rotational inertia of an object is expressed by the second moment of the magnitude of the position vector from the axis with respect to the mass element. The experiment utilizes the relationship between the periodic time of the oscillation of a pendulum and the moment of inertia to compute the experimental value of moment of inertia.
The errors incurred in the experiment were mainly experimental. The acceleration of the object was recorded only once for every mass, so there were experimental errors that could be reduced by replicating the experiment, preferably three times for every mass, and computing the average. In the moment of inertia experiment, the number of replicates for the period of the pendulum was three, and there is a large difference between the first two values and the third.
The percentage errors indicate a high amount of experimental error, and replicating the procedure would help reduce it. The first two measures tend to increase the experimental value of the moment of inertia: the computed moment of inertia grows with the square of the pendulum's period, so an increase in the period of oscillation increases the moment of inertia.
The rotational inertia of an object is expressed by the second moment of the magnitude of the position vector from the axis with respect to the mass element. The angular acceleration of a rotating object is computed from the linear acceleration by dividing it by the radius of the step pulley. The angular acceleration is kept constant because the hanging mass provides a nearly constant torque throughout the experiment.
For the purpose of this example, the 9,732 runners who completed the 2012 run are the entire population of interest. So I'm taking 16 samples, plot it there. So we got in this case 1.86.
This approximate formula is for moderate to large sample sizes; the reference gives the exact formulas for any sample size, and can be applied to heavily autocorrelated time series like Wall Street stock data. When there are fewer samples, or even one, then the standard error (typically denoted by SE or SEM) can be estimated as the standard deviation of the sample. When the sampling fraction is large (approximately 5% or more) in an enumerative study, the estimate of the standard error must be corrected by multiplying by a "finite population correction". If the standard error of the mean is 0.011, then the population mean number of bedsores will fall approximately between 0.04 and -0.0016.
I'll show you that on the simulation app, probably later in this video. These assumptions may be approximately met when the population from which samples are taken is normally distributed, or when the sample size is sufficiently large to rely on the Central Limit Theorem. Take the square roots of both sides.
A review of 88 articles published in 2002 found that 12 (14%) failed to identify which measure of dispersion was reported (and three failed to report any measure of variability).4 Assumptions and usage: if its sampling distribution is normally distributed, the sample mean, its standard error, and the quantiles of the normal distribution can be used to construct confidence intervals. The standard deviation is used to help determine the validity of the data based on the number of data points displayed within each level of standard deviation. A statistic with a large standard error has little accuracy, because it is not a good estimate of the population parameter.
Larger sample sizes give smaller standard errors, as would be expected. Graphs that show sample means may have the standard error highlighted by an 'I' bar (sometimes called an error bar) going up and down from the mean, thus indicating the spread. And if it confuses you, let me know. For the runners, the population mean age is 33.87, and the population standard deviation is 9.27.
As will be shown, the standard error is the standard deviation of the sampling distribution. I take 16 samples, as described by this probability density function, or 25 now. Correction for correlation in the sample: the expected error in the mean of A for a sample of n data points with sample bias coefficient ρ.
This was after 10,000 trials. Next, consider all possible samples of 16 runners from the population of 9,732 runners. When we calculate the standard deviation of a sample, we are using it as an estimate of the variability of the population from which the sample was drawn.
And, at least in my head, when I think of the trials: you take a sample of size 16, you average it, that's one trial. However, the sample standard deviation, s, is an estimate of σ. The term may also be used to refer to an estimate of that standard deviation, derived from a particular sample used to compute the estimate.
The standard error estimated using the sample standard deviation is 2.56. We will discuss confidence intervals in more detail in a subsequent Statistics Note. The standard error falls as the sample size increases, as the extent of chance variation is reduced; this idea underlies the sample size calculation for a controlled trial, for example. This is usually the case even with finite populations, because most of the time, people are primarily interested in managing the processes that created the existing finite population; this is called an analytic study.
Repeating the sampling procedure as for the Cherry Blossom runners, take 20,000 samples of size n=16 from the age-at-first-marriage population. Had you taken multiple random samples of the same size and from the same population, the standard deviation of those different sample means would be around 0.08 days. So the question might arise: well, is there a formula?
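There is: the standard error of the mean is the sample standard deviation divided by the square root of the sample size. A minimal Python sketch of SE = s/√n; the sample of n = 16 ages is made up.

```python
import math

def standard_error_of_mean(sample):
    """SE of the mean: sample standard deviation divided by sqrt(n)."""
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return s / math.sqrt(n)

# Hypothetical sample of n = 16 runner ages.
ages = [28, 35, 31, 42, 39, 25, 33, 37, 30, 44, 29, 36, 41, 27, 34, 38]
print(round(standard_error_of_mean(ages), 2))
```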
The answer to the question about the importance of the result is found by using the standard error to calculate the confidence interval about the statistic. Gurland and Tripathi (1971) provide a correction and equation for this effect. This is expected, because if the mean at each step is calculated using many data points, then a small deviation in one value will cause less effect on the final mean.
One, the distribution that we get is going to be more normal.
Because of random variation in sampling, the proportion or mean calculated using the sample will usually differ from the true proportion or mean in the entire population. The graph below shows the distribution of the sample means for 20,000 samples, where each sample is of size n=16. A small standard error is thus a Good Thing. The standard error of the mean (SEM) (i.e., of using the sample mean as a method of estimating the population mean) is the standard deviation of those sample means over all
And then let's say your n is 20. The data set is ageAtMar, also from the R package openintro, from the textbook by Dietz et al. For the purpose of this example, the 5,534 women are the entire population. When the sample is representative, the standard error will be small.
The 31st of January 2018 gave Earthlings on certain parts of the planet a rare combination of four celestial events all happening simultaneously: a total lunar eclipse, a super moon, a blood moon, and a blue moon. This does not mean a super-huge moon partially blue and partially red in colour. And no, it is not at all bad to witness this with the naked eye, nor does the event indicate anything inauspicious. All it gives you is a good dose of goose bumps.
So, let’s understand in simple terms without complicated mathematics, why it happens and what really happens beyond what meets our eyes. To make it easy to understand, let’s break the title of this post into a few simple terms and understand them individually.
Basics of celestial motion:
A massive celestial body has strong gravitational field. That makes other bodies sufficiently within its gravitational field get attracted towards its centre. However, the bodies that have high enough speed directed off centre, don’t really come down crashing to the centre. Rather they get accelerated in a way that the direction of motion gets continually bent towards the centre, like a stone tied to a string revolving around a pivot (or your finger, if you are the one swirling it). That path of periodic revolution is called an orbit. This is how the members of our solar system should be moving around each other. But the real motions happen with a slight difference. Instead of moving in a sweet simple circle, they move in individual elliptical orbits. That’s because every individual body is under the influence of multiple gravitational fields of the bodies around them. Thus they eventually find a balance and move accordingly.
Take two fixed points on a plane sheet of paper. Fix the two ends of a long string at those points, one end at each. Now, hold the string taut with a pencil and draw a curve so that the string remains taut throughout. This curve is an ellipse. Every point on the curve has two distances, one from each fixed point, and the total of these two distances is the same for every point on the curve: the length of the string. This means that an ellipse is the trace of all points on a two-dimensional plane whose distances from two fixed points have a constant sum. There will be two points on this ellipse that are the farthest from each other, called the vertices, and two points that are the closest to the centre, called the co-vertices. The vertices define a line called the major axis and the co-vertices the minor axis. These axes are perpendicular to each other.
We all understand a circle fairly well: every point on the circle is at an equal distance from the centre, which means whatever force moves a particle in a circle has a constant value in all directions. In space, where the huge gravitating bodies are neither uniform nor uniformly spaced around a centre, the force is directional, and so the periodic revolving motion (if any) of almost every celestial object follows an elliptical curve, called an orbit. The object revolves directly around the body whose gravitational field is the strongest it experiences. Instead of being at the centre of revolution, the driving object sits at one of the focal points, or foci (the fixed points of an ellipse, as discussed above). Thus the driven (revolving) object periodically comes to a closest point (one vertex) as well as to a farthest point (the other vertex).
The Earth’s orbit around the Sun is elliptical with the Sun being at one of the fixed points called the focal points. The point on the major axis where Earth comes to the closest to the Sun (one of the fixed point) is called Perigee. The other vertex is then called Apogee, where Earth is at the farthest from the Sun. Earth crosses the apogee once every solar year (365 days on normal and 366 days for leap years) or sidereal year (365 days, 5 hours, 48 minutes, 45 seconds).
Similarly, the Moon comes closest to Earth at the perigee of its elliptical orbit around Earth. There it appears up to about 14% bigger than it does at apogee. A full moon near perigee is called a Super Moon.
The super moon of March 19, 2011 (right), compared to a more average moon of December 20, 2010 (left), as viewed from Earth
As common sense has it, a Full Moon comes once every month, i.e., once in a 30-day lunar month. However, the concept of the lunar month does not purely apply to celestial lunar events; the relevant period is the synodic month (29.5 solar days), which is the period for the Moon to come back to the same point along its orbit around Earth when all measurements are made with respect to the Sun. A lunar month is just an easy approximation for seeing things purely as they appear from Earth. In a lunar month the Moon has two phases of visibility from Earth: a waxing phase and a waning phase. What really happens during these phases has nothing to do with Earth's shadow cast by the Sun; rather, it is the amount of the Moon's area that appears illuminated as seen from Earth.
Other than being in Earth’s shadow a few times (lunar eclipses), the Moon is always half exposed to direct solar radiation (sunlight). However as the Moon, during its revolution around Earth gradually comes between the Sun and the Earth, the part of its bright hemisphere visible from Earth gradually decreases. That’s the waning phase. It reaches its peak when the moon is totally in between the Sun and the Earth as seen from above the Sun, Earth and Moon projected on a two dimensional plane as they move celestially in space. As the only part of the moon facing the Earth is totally unlit by the Sun, the Moon having no light of its own, appears the darkest. Some call it a No Moon. But usually it is called a New Moon. Then comes the crescent moon and so on. When the moon begins to move out to a side (Earth’s side of course), it enters a waxing phase, means its bright hemisphere gets increasingly visible to us. This phase reaches its peak with a Full Moon. This means that considering Earth’s plane of revolution around the sun, as seen from above the plane, the moon is behind Earth. Thus its fully lit hemisphere is totally visible to the Earth.
Then why does it not fall in Earth's shadow? Why isn't every full moon a lunar eclipse?
Note carefully here, there are two simultaneous revolutions happening. One is that of the Earth around the Sun in a plane (say A). Another is the revolution of the Moon around the Earth in another plane (say B). Actually these planes A and B are inclined to each other by approximately 5 degrees. 5 degrees may seem very small but when projected over to the astronomical distances between the Earth and the Moon, it enables the moon to gain some height above the plane A, thus escaping Earth’s shadow by a consistently good margin to appear fully lit by the Sun (except for the lunar eclipses).
A lunar eclipse happens when the Moon is captured by Earth's shadow during its revolution around Earth. This means the Sun (S), the Earth (E), and the Moon (M) are almost aligned in a straight line in the order S-E-M, which also means a full moon. A lunar eclipse is actually caused by Earth's apparent motion if we treat the Moon and the Sun as two fixed points. This apparent motion is simple: apart from revolving around the Sun, Earth also moves up and down, crossing the line joining the Sun and the Moon. A lunar eclipse happens during these crossings only if there is a full moon, i.e., the Earth is aligned almost exactly between the Sun and the Moon. When the alignment is not just almost but exact, we call it a perfect or Total Lunar Eclipse.
The phrase “Blue Moon” might have come from a few sightings in which an otherwise full, bright, white Moon appeared with a bluish tinge. That tinge is due to certain local atmospheric conditions: particles in the local atmosphere absorb components of the white light, leaving mostly the blue end of the spectrum (out of the VIBGYOR of white) free. Alternatively, the blue part of the spectrum of white light from the Moon may be scattered more than the rest, like the scattering of sunlight that sometimes makes the sky appear blue. It is genuinely rare for the Moon to literally appear a bit blue, which is why a rare event gets complimented by the phrase "once in a Blue Moon".
There is a less rare event, when there are two full moons in a lunar month. It is rare because of our calendar, not due to any celestial occasion. Remember the difference between a lunar month and a synodic month: a lunar month consists of 30 solar days, a synodic month of 29.53 solar days. There is a full moon strictly once a synodic month, but as we take a calendar month to be 30 solar days, there is an error we always ignore. As per our calendar, we have 12 months in a year, every month has one lunar cycle, and so in a year we expect 12 lunar cycles. The actual number of solar days for 12 lunar cycles is 29.53 × 12 = 354.36, yet our solar year has 365.24 days. Thus every year we accumulate a lunar error of 365.24 − 354.36 = 10.88 solar days, and an extra full moon pops out of our approximated lunar calendar every 29.53/10.88 = 2.7 years (approx). Thus, as we see, we can predict when the next blue moon will occur.
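The whole calculation fits in a few lines; a minimal Python sketch using the same rounded constants as above:

```python
SYNODIC_MONTH = 29.53   # solar days per full-moon cycle
SOLAR_YEAR = 365.24     # solar days per year

drift_days_per_year = SOLAR_YEAR - 12 * SYNODIC_MONTH    # ~10.88 days
years_per_extra_full_moon = SYNODIC_MONTH / drift_days_per_year

print(round(drift_days_per_year, 2))        # 10.88
print(round(years_per_extra_full_moon, 2))  # ~2.71 years between blue moons
```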
During a total lunar eclipse, especially during a Super Moon, the Moon is totally shadowed by Earth and thus deprived of direct sunlight. Yet, because of Earth's atmosphere, part of the solar radiation grazing Earth's edge gets slightly bent towards the Moon by refraction. As we know from the dispersion of white light by a prism, the red end of the spectrum is deviated the least, so this reddish-orange part of the white sunlight manages to reach the Moon while the rest of the spectrum misses it. This gives the Moon a reddish or orange tinge. The exact tinge depends on the Moon's distance from Earth: each colour of ROYGBIV is refracted into an increasingly narrow path enveloped by the red, so the Moon's position between perigee and apogee during an eclipse determines which tinge it takes and when.
If nothing else, people can enjoy a great song.
This one puzzled us when we gave the 2021 Further Mathematics Exams a three-minute scan, but we didn’t bother then to think, um, further. Now, however, since graph theory is likely a mandatory part of Specialist 1&2, and thus also, at least technically, a mandatory part of Specialist 3&4, we have thought a little harder. So, after discussions with John Friend and with a colleague who we shall refer to as Professor Combo, here we are.
The following question appeared in the Network module on the 2021 Further Mathematics Exam 1. Have fun.
60 Replies to “WitCH 77: Road to Nowhere”
So… if we use the accepted definition of a graph being a collection of vertices and edges, this diagram is not a graph.
Or a “network.”
I’m guessing the intended answer was D since there are no “1-step paths” from any building back to itself and likewise none from J to M, K to N or M to N, each counting twice in the matrix for a total of 11 zeros.
Here is my interpretation (I’m no expert so I’m receptive to counter-arguments and debate):
The diagram is a MAP, not a graph or a network. The map shows how the pathways connect the buildings and can be used to draw a graph. There's a loop at K (because there is a direct route that leads back to K), a double edge joining K and L, and a double edge joining J and K (because there are two different direct routes between those pairs of buildings). Note that there is a direct route between L and J. I've attached my graph and the adjacency matrix. Each different route is an edge.
A couple of notes:
1) The pathways are NOT edges. Each different route is an edge.
2) Any confusion comes from thinking of the pathways as if they are edges.
3) Maths Quest SM1/2 has a similar example and question and makes the distinction between pathways and routes clear. The distinction is unclear in Nelson SM12. Cambridge SM12 has no similar example or question. But Cambridge FM34 does – Ex 14C Q8 – and makes the distinction between pathway and route clear (one of the parts says to "Draw this map as a graph by representing towns as vertices and each different route between towns as an edge.")
I believe there are two mistakes in the question:
1) It should have been worded something like
“The below shows …. An adjacency matrix for the graph that represents this map is formed. …”
VCAA’s use of the word network falsely suggests -I think – that the diagram is a graph. It is not.
The above understanding leads to Option C as the answer.
2) Another possible interpretation is that you can add a vertex where the pathways intersect; then the map becomes a graph … This does not lead to any correct option. So another error in the question is that there are two different and reasonable models that can be used – one leads to Option C and the other leads to no option. Perhaps – one might argue – VCAA intended only the model leading to an option. The model I've used is the model given in the textbooks; they don't mention adding vertices where the pathways intersect.
NB: This post is mainly to try and get my own thoughts straight on this question, so all feedback is welcome.
I took a quick look through the comments and didn’t see this, but my first model graph did have loops at each vertex. I mean, in the real world, I can go from M to M without issue, I just stand still. That graph has an adjacency matrix with not enough zeros.
I don’t think this is the best way to interpret the map they have given, and if I were to do the question I’d have to go with another interpretation that resulted in an answer from those given. I’d pick C or D. Problem is, if we can’t stand still and make a loop, then also we aren’t allowed to walk down a path and turn around and make a loop. So are we allowed to do the turns at K and make a loop? Are we allowed to turn almost 180 to make the J -> L path? These things are not clear to me.
Another question that is too ambiguous. Reminds me of a comment I just read suggesting that we could do with a few more pedants in the world…
My adjacency matrix had a ‘typo’ (no-one took the chance to have a free hit! And don’t say you were just being nice! Better to say you didn’t care!!) Attached is a correct copy (correction in red).
Yep… missed the loop at K.
Which makes 10 zeros and answer C.
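For what it's worth, here is a minimal Python sketch of the bookkeeping being argued about: build an adjacency matrix from an edge list for an undirected multigraph (counting a loop once on the diagonal, as the school convention seems to have it) and count the zero entries. The edge list is made up, not the exam's map, so it happens to give 12 zeros rather than 10; the point is only the mechanism.

```python
# Hypothetical edge list on buildings J..N: each pair is an edge,
# a repeated pair is a double edge, and ("K", "K") is a loop at K.
edges = [("J", "K"), ("J", "K"), ("J", "L"), ("K", "L"),
         ("K", "L"), ("K", "K"), ("L", "M"), ("L", "N"), ("M", "N")]
labels = ["J", "K", "L", "M", "N"]
idx = {v: i for i, v in enumerate(labels)}

A = [[0] * len(labels) for _ in labels]
for u, v in edges:
    if u == v:
        A[idx[u]][idx[u]] += 1  # convention: a loop counts once here
    else:
        A[idx[u]][idx[v]] += 1
        A[idx[v]][idx[u]] += 1

for row in A:
    print(row)
print("zeros:", sum(row.count(0) for row in A))  # 12 for this made-up map
```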
I’ve always understood a and a to be interchangeable in VCAA-speak.
But that doesn’t mean anything.
Agreed Red Five, not a graph in the usual definition. But a “network”? Is there a formal definition?
A very good question.
The term is ambiguous, and is context-dependent, because there is the idea of a flow network (A digraph with a source and a sink) along with a capacity function, but in a variety of contexts, it can just mean a graph/directed graph with certain attributes. For instance, a network of connected devices, or a website and its connections to other web pages. You’ll have to take that up with the study design, because flow networks (and applications such as min-cut/max flow + matching) are in the module.
Which module, Sai? A FM3/4 module?
The networks and decision module was what I was looking at for FM3/4.
FM3/4 is above my pay grade. I think the Graph Theory in SM3/4 will be very different to what it is in FM3/4 – more theoretical (‘pure’) and more focussed on proof (including existence and non-existence proofs).
Thanks, Sai, although the fact that “flow network” is a thing doesn’t imply that “network” is a thing.
The real question is, is “network” a synonym for “graph”, or is it simply a vague term for any real world scenario that might be modelled by a graph? If the former, how does one answer the above exam question? On the other hand, if the latter, how does one answer the above exam question?
My understanding of the definition of a network is that it’s a simple, weighted, directed graph G satisfying:
a) There is exactly one vertex in G, called the source, having no incoming edges,
b) There is exactly one vertex in G, called the sink, having no outgoing edges,
c) The weight of the directed edge (i, j), called the capacity of (i, j), is a non-negative number,
d) The undirected graph obtained from G by ignoring the directions of the edges is connected.
So the diagram given in the question is definitely NOT a network. It is a MAP. VCAA screwed up.
So Marty, to answer the question, you have to realise that VCAA screwed up and meant MAP, not network, and assume the wording I suggested earlier. Then again, maybe VCAA has its own definition of a network that’s peculiar to FM3/4. It wouldn’t be the first time VCAA had made up its own definition …
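To make the four conditions above concrete, here is a minimal sketch (my own illustration, not from any of the texts discussed; it assumes the digraph is given as a dict mapping each vertex to {successor: capacity}):

```python
# Check conditions (a)-(d) of the flow-network definition above.
def is_flow_network(g):
    verts = set(g) | {v for succs in g.values() for v in succs}
    indeg = {v: 0 for v in verts}
    for u, succs in g.items():
        for v, cap in succs.items():
            if u == v:
                return False      # 'simple' rules out loops
            if cap < 0:
                return False      # (c) capacities are non-negative
            indeg[v] += 1
    sources = [v for v in verts if indeg[v] == 0]   # (a) unique source
    sinks = [v for v in verts if not g.get(v)]      # (b) unique sink
    if len(sources) != 1 or len(sinks) != 1:
        return False
    # (d) the underlying undirected graph is connected
    adj = {v: set() for v in verts}
    for u, succs in g.items():
        for v in succs:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = set(), [next(iter(verts))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u] - seen)
    return seen == verts

print(is_flow_network({"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 4}, "t": {}}))  # True
```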
John, your understanding of “network” is neither here nor there. It is a question of what the word means, if anything, in (Further and, now, Specialist) VCE texts and in (ha ha) the VCE study design.
The issue for the exam question above is: does the word “network” imply anything of substance, or is it just a vague scenario word? In either case, what are students expected to do, and what is the justification for that expectation?
My thoughts on this weird map:
This initially seems to resemble a multigraph (where parallel edges are permitted), but it’s downright impossible to discern which edge goes to which: I initially perceived that there were two edges connecting K and J. The same issue occurs for J and L; it is hard to tell whether there is an edge between J and L. In practice, it’s very reasonable either to insert an intermediate vertex there or to use a multigraph, rather than leaving the diagram as something that is not easy to interpret. For both reasons, it definitely isn’t a graph!
Thanks, Sai. It obviously isn’t a graph, even noting that “graph” in this context permits multiple edges between the same two vertices.
The question is, does the word “network”, or anything, somehow guide the student on how they are supposed to make a precise object? Does the diagram, in any mathematical or VCE-conventional sense, have a unique adjacency matrix?
I assume the answer is no, but given I don’t know the lingo or the subject, I figure it’s worth asking.
Cambridge General Mathematics Units 1&2 on page 454 uses the phrase “Graph or Network” which to me implies that the two words are interchangeable in the eyes of the author and publisher and therefore in the eyes of many teachers who use this text as their source of knowledge.
That does not make it correct of course.
On page 457 of the same textbook… “…a weighted graph is often called a network.”
So maybe the author changed their mind or is also confused about the difference.
There’s nothing that says the graph has to be ‘simple’, so multiple edges and loops are perfectly fine. What I drew is a perfectly fine graph that represents the routes between buildings shown on the given MAP. The adjacency matrix is unique, provided:
a) you exclude isomorphisms of the graph, and
b) you don’t insert extra vertices at the intersection points.
The Cambridge FM3/4 textbook, for example, simply defines a network as “a weighted graph where the weights represent physical quantities such as time, distance or cost.” VCAA have made very sloppy use of the word ‘network’. Marty, I’ll bet that VCAA have used the word ‘network’ exactly as you said above: “just a vague scenario word”.
As for “what are students expected to do, and what is the justification for that expectation?” …
1) Students have to interpret VCAA’s intent. As always. And it’s part of the job of the poor schmuck that teaches them to teach students that they will need to do this and how to do this.
2) There’s no justification. It shouldn’t be expected because it shouldn’t be necessary. There should be an expectation that VCAA can deliver an exam that does not have sloppy errors. Because there should be an expectation that the exam writers and vettors are competent.
btw the definition I gave for ‘network’ is the standard definition. If the word ‘network’ is used in any mathematical context, the definition I gave is its meaning. You’ll find this definition in any standard textbook on discrete mathematics.
Now, what are the chances that the new Stupid Design has a glossary of terms where things like graph, network etc. are clearly defined …? This question is symptomatic of many underlying VCAA problems.
Re: “The same issue occurs for J and L, it is hard to tell if there is an edge between J and L.”
Yes, it’s a bit tricky to see. But a careful inspection of the MAP shows that there is a direct route between J and L, and therefore an edge ‘obviously’ connects J and L in the graph representation.
2020 Exam 1 Module 2 Question 8 – Incorrect usage of the word ‘network’.
2019 Exam 1 Module 2 Question 6 – Correct wording.
2015 Exam 1 Module 5 Question 6 – Correct wording. This is how the 2021 question should have been worded!!
So VCAA can get it right – but unfortunately the writer(s) and vettor(s) of 2020 and 2021 Module 2 lacked an understanding of the word ‘network’. Not an ideal quality when writing questions for a module called Networks and Decision Making.
From the back cover of “Introduction to Stochastic Networks” by R. F. Serfozo (Springer, 1999): “In a stochastic network, such as those in computer/telecommunications and manufacturing, discrete units move among a network of stations where they are processed.” The term “network” itself is not explained or defined in that monograph. The preface starts with the sentence, “The term stochastic network has several meanings.” (And it goes on to say which meaning the book adopts.)
In view of this, I am inclined to go with Marty’s offered second alternative to describe “network” (in his words) as “a vague term for any real world scenario that might be modelled by a graph”. It would seem strange to me that a stochastic network may exhibit, as per a reasonable definition such as in the above book, all kinds of weird set-ups while a network could not. (Perhaps I am too much under the sway of the idea that “stochastic X”, where “X” stands for anything, should be a version of “X” just with probabilities thrown in.)
Hi Christian. I’m going to put my head on the chopping block here (I’m no expert) and maintain that the word ‘network’ has a well-defined mathematical meaning – the one I gave above. This definition makes it clear that a network IS a graph, a graph with specific properties. The picture in the VCAA question is NOT a network in the mathematical sense of the word and should NOT be called a network. VCAA should have said something like:
“The MAP below …” or “The diagram below …” and it could have referred to “the system of pathways” or “the arrangement of pathways”. VCAA’s use of the word ‘network’ in this question was lazy, sloppy and ignorant.
I do not accept that in the world of discrete mathematics, network is simply “a vague term for any real world scenario that might be modelled by a graph”, as Marty puts it.
Re: Serfozo. A common definition for stochastic network that I’ve come across is:
“Stochastic networks are networks that vary over time with non-binary vertices that represent a probability for a link between two nodes.”
Does the word ‘network’ as used in ‘network theory’, ‘communication network’ etc. lose its precise and nuanced mathematical meaning, changing to something more along the lines of what Marty offered?
Regardless of this, in a module called Networks and Decision Making that is clearly an introduction to Graph Theory, one should and must expect much better than the word ‘network’ only having some vague meaning. Mathematics is based on precise definitions and language. This is something that VCAA and its goons consistently fail on.
John, I think you’re pushing weirdly and way too hard for a single “correct” usage of the term “network”. I agree, obviously, that the VCE use of the term is vague and sloppy and inconsistent. But any word that is used in the real-world and the applied-world and the pure-world is going to have a variety of different and less-or-more defined and equally legitimate usages.
I’ll try to summarise the VCE usage in a later comment or update, but to return to the exam question, I don’t want to lose sight of the key point. The key point is, if “network” is somehow a formal mathematical object, enough to have an adjacency matrix, then the diagram/map/whatever included in the exam question is not a network.
The key point is, whatever a “network” might be, the exam question is screwed.
I’ll do this as a comment rather than a post update, since people are likely to have corrections and clarifications. Here is my summary of the use of “network” in materials for VCE that I have in hand.
GENERAL MATHEMATICS 1&2 STUDY DESIGN
a) The Matrices sub-topic (p 19) refers to “road networks” and “people in a network”, clearly colloquial usages.
b) The Graphs and Networks sub-topic (p 20) refers to “weighted graphs and networks”. The “and” suggests that the two terms are not synonymous, but that suggestion may just be clumsy wording.
SPECIALIST MATHEMATICS 1&2 STUDY DESIGN
a) The topic Graph Theory (P 46) refers to “social networks”, with no suggestion of a formal definition.
FURTHER MATHEMATICS 3&4 STUDY DESIGN (pretty much what Sai was indicating)
a) The introduction to the module Networks and Decision Mathematics (p 61) refers to “the use of networks to model and solve problems involving travel, connection, flow, matching, allocation and scheduling”
b) Various applications of “networks” are then suggested, without pointing to a clear definition.
c) In the sub-topic Shortest Path Problems, the SD refers to “the shortest path between a given vertex and each of the other vertices in a weighted graph or network”. Possibly this is intended to mean “weighted graph” and “network” are synonymous, but that is unclear.
CAMBRIDGE FURTHER TEXT
a) Chapter 14 is titled Graphs, Networks and Trees
b) Section 14A is titled Graphs and Networks but the word “network” does not appear in the text or the exercises.
c) The first use of “network” in the text is in 14D, titled Weighted Graphs and Networks. The text seems to use “network” exactly as a synonym for “weighted graph”, i.e. a graph with numbers on the edges.
d) The usage in 14D is reinforced in the Key Ideas summary of the chapter.
CAMBRIDGE SPECIALIST 1&2 TEXT (via GO only, because Cambridge are being dicks)
a) Chapter 27 is titled Graph Theory. The word “network” does not appear.
MATHS QUEST SPECIALIST 1&2 TEXT
a) Chapter 5 is titled Graphs and Networks. The term “network” is used often but is never defined. It appears to be used mostly in a colloquial sense, but occasionally also as a synonym for (unweighted) “graph”.
My (non-expert) five cents:
Re: “GENERAL MATHEMATICS 1&2 STUDY DESIGN …
b) The Graphs and Networks sub-topic (p 20) refers to “weighted graphs and networks”. The “and” suggests that the two terms are not synonymous, but that suggestion may just be clumsy wording.”
The standard definition of a network is: “a simple, weighted, directed graph G satisfying …”
The word ‘directed’ is very important. So a ‘weighted graph’ without any further qualification is definitely not synonymous with a network. It would make sense to introduce weighted graphs first and then add conditions to define a network.
So if the Cambridge FM3/4 text does use “network” exactly as a synonym for “weighted graph”, i.e. a graph with numbers on the edges, then the text is incorrect.
I can also add that Nelson SM1/2 does not mention networks. But it is actually very clear about the difference between maps and graphs. Its chapter summary includes the following (and I quote):
Road maps and graphs
To draw a road map as a graph
* list the vertices: these may be towns or other landmarks
* list the edges: these are all the possible paths between the vertices.
A path from vertex A to vertex A is drawn as a loop.
* use the two lists to draw a graph
PS – Apparently a digital copy of Nelson SM1/2 does not exist.
Ugh! There is no “standard definition”, and if there were, it would be largely irrelevant.
Marty, are you saying that there’s no standard definition of ‘network’ within VCAA? If so, I completely agree – I’ve looked at all the questions in the Networks and Decision Making Module for all of the available VCAA Further Maths exams. VCAA’s consistently inconsistent use of the word ‘network’ over the past 15 years is astounding.
But … if you’re saying that there’s no standard definition of ‘network’ in textbooks on Discrete Mathematics, then I disagree. The Discrete Maths textbooks I’ve looked at have been consistent. But I’ll happily admit I’m wrong if two conflicting definitions are cited.
I think the definition is relevant. The meaning of a word and how that word is used is important.
If the meaning of ‘network’ and VCAA’s use of the word is not the issue here, I don’t see what is. For me the major issue is VCAA calling the map a network when it’s not. It’s not a network, it’s not a graph. I think we can all agree on that. But you draw a graph from the diagram given in the question.
I’ll also agree that there’s a second issue in that the map can be modelled in two standard ways (leading to two different answers):
1) the way I did it, and
2) adding vertices to the forks in the pathways.
What is the problem that you see with the question?
John, the word “network” in the exam question is probably an error but it is not the main issue. The main issue is that what is pictured is not anything that has an adjacency matrix. Moreover, as you indicate, there are two natural ways to model the diagram with a graph that *does* have an adjacency matrix, leading to two different answers.
That’s the real problem: the exam question is stuffed, entirely independent of the vague or incorrect language. The language issue, the meaning of the word “network”, is very secondary.
On the language issue, there are two separate aspects:
a) The use of the term in VCE materials
b) The use of the term more generally
I have stated clearly my view of both (a) and (b). In particular on (b), I wrote:
“But any word that is used in the real-world and the applied-world and the pure-world is going to have a variety of different and less-or-more defined and equally legitimate usages.”
Stop being a pedant.
So, if I was trying to get into the head of the person who wrote this question (which I’m not…) I would guess that their intention was:
1. Draw a picture which has some key points named.
2. Suggest that you produce a graph (I’ll stay away from the N-word for the moment) from the information provided.
3. Believe that said graph will have an adjacency matrix.
4. Ask a question about said adjacency matrix.
If you read the question as a FM student trying to do well on an exam (and, for better or worse, that is the reason I’m engaging with the discussion…) then you see how Answer C makes sense.
The other options do not make sense, so select Answer C and move on with your life.
(Your teachers won’t, but you can.)
If I were trying …
I’m trying to improve my grammar, I promise…
Just teasing. I agree with your comment, and 2 was very funny. The only question is, is there, in VCE, a standard method of turning such non-graphs into graphs? I assume not.
The textbook writers write/vet the exams, and the exam writers/vettors write the textbooks.
This Circle of Stupidity means that, in the fantasy world of VCAA, there IS “a standard method of turning such non-graphs into graphs”. And that method is to do what is written in those textbooks.
Marty, we will disagree on this (but it might come back to bite you) – words, definitions, grammar etc. are important. It’s not being pedantic.
Marty says John is a pedant.
However, with a couple of well-placed commas … (Boom boom)
Jesus. You think I don’t know words are important?
If there were a few more pedants on the vetting committee (does such a thing really exist?) we would not have the problems we do now.
But this goes well beyond teachers, schools and textbook writers.
We’ve all (I assume, from reading this blog and the comments) gone down that rabbit-hole often enough to know the hole is very, very deep.
I worry a bit about this, because the last time I taught 3+4 Further (2004, for those playing “guess who” at home…) such a question (here is a picture, produce a graph) was really common on SACs.
But, I learned this from the teachers I worked with and so perhaps, in my own little bubble I worked under the assumption that everyone else teaching VCE thought the same.
When I took a holiday to teach IB, there was a standard glossary of terms for each subject. Maybe VCE could…
…yeah, OK… nevermind.
VCAB * did!
(See part of the 1988 Course Description Booklet for Mathematics here: https://mathematicalcrap.com/2022/01/18/specialist-mathematics-12-note-sharing-and-idea-sharing/#comment-13535 )
What we have seen over the last 40 years is an erosion of quality, exponentially accelerated by VCAA. (The erosion actually began once the VUSEB was replaced, but VCAB did OK).
VCAA’s Stupid Design is not fit to lick the boots of the VCAB Course Description Booklet. If VCAA truly wanted to give teachers a fit-for-purpose Course Description Booklet for Mathematics, as opposed to the Stupid Design that gets thrown at us, all it has to do is look to the past. But this won’t happen for at least the next 5 years, because some numbnuts at the VRQA
( https://www.vrqa.vic.gov.au/Pages/default.aspx )
elephant stamped VCAA’s piece of shit (nearly 3 months ago, but for some reason VCAA won’t make it available until we’re all back at school and won’t have time to dissect it) that we all have to live with. These bastards all scratch each other’s backs.
* VUSEB –> VCAB –> VBOS –> BOS –> VCAA.
On this, we most definitely agree.
“here is a picture, produce a graph” is all part of the VCAA ‘give questions a real life context no matter how stupid it might be’ imperative. (Which is why we get stunt cars … that have an infinite initial acceleration: https://mathematicalcrap.com/2021/11/11/witch-75-car-crash/ )
From the Daft Stupid Design (SM1/2):
“• examples of graphs from a range of contexts such as molecular structure, electrical circuits, social networks, utility connections…”
To which must clearly be added ‘road and pathway networks’, especially roads and pathways with forks in them.
Which, I have to say, is still better than a stunt car.
For SM12, that is insane.
Very funny! (And what are you implying?)
Not implying anything. It just came to mind for some reason.
Some people who come from a non-English speaking background have a better understanding of grammar than those from an English speaking background. I could tell a good story about this – but not here.
This guy thinks it is just words, depending on whether the context is real or abstract.
In an exam “pick answer C and move on” as RF suggests
So, if “a graph is what math people talk about” and a network is what “non-math people talk about”, does that mean this exam question had (at least) two authors?
Might explain a few things!
Indeed. And the mathematical competencies of the writers and vettors are in parallel, not series.
So, it is a NOR gate we are dealing with?
Not quite … two False inputs would give a True output.
I’m making an analogy to the equivalent resistance of several resistors all connected in parallel: 1/R = 1/R_1 + 1/R_2 + … + 1/R_n.
I feel a SAC question coming along…
“The resistance of the examiners to change is independent of any other examiner…”
I have taught students in General Mathematics about graphs/networks, published a couple of papers on graph theory, and come to the following conclusion.
Graph theory, as it is presented in VCE mathematics, is a litany of definitions and superficial applications. In a sense this is a characteristic of the early stages of graph theory. You need to spend a lot of time (say a semester) to get to something substantial. I have suggested elsewhere that graph theory should be deleted from the curriculum.
Terry, I completely agree with you. And I would add ‘proofs’ to your litany list, if the implicit inclusion of graph theory in SM3/4 is taken into account.
And yet, the Daft Stupid Design explicitly states that SM1/2 Graph Theory topic includes:
“examples of graphs from a range of contexts such as molecular structure, electrical circuits, social networks, utility connections and their use to discuss types of problems in graph theory including existence problems, construction problems, counting problems and optimisation problems”
What it means by “optimisation problems” is anyone’s guess, since weighted graphs and networks are not part of the sillibus, despite the use of the word ‘network’ in “social networks”. And what’s meant to be done with all these examples is anyone’s guess. But there’s no mention of pathways (with forks) linking buildings or roads linking towns …
The inclusion of graph theory is farcical. I don’t know how it gets taught in Further (if chosen from the optional modules), where there seems to be a lot more content.
Look at how it is in Further 3&4…
80 to 90% of the marks on any given exam are easy.
Sure, there is usually one quite challenging question at the end, but for a lot of students, the middle-level-difficulty questions in this module are, relatively speaking, easier than other modules.
In graph theory, a network is a directed graph in which weights (which need not be integers) are assigned to the edges. The main reason for talking about them at all in school seems to be to arrive at the max.-flow min.-cut theorem which is a key theorem in the study of networks. An excellent text on graph theory is: R. J. Wilson, Introduction to graph theory. Curriculum designers could do worse than use the definitions in Wilson. Here is a link to the 4th edition.
Click to access wilsongraph.pdf
Wilson’s book shows that there is a lot of material than can be studied before you ever need the concept of a network.
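For anyone who wants to see the theorem Terry mentions in action, here is a self-contained Edmonds-Karp sketch (my own illustration, not taken from Wilson; the graph is a dict of edge capacities):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths.
    cap is a dict-of-dicts of residual capacities, modified in place."""
    # make sure every edge has a reverse entry in the residual graph
    for u in list(cap):
        for v in list(cap[u]):
            cap.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        # BFS for an augmenting path with spare capacity
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total   # no augmenting path left: the flow is maximal
        # find the bottleneck along the path, then update residual capacities
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
        total += bottleneck

g = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
print(max_flow(g, "s", "t"))  # 5, equal to the capacity of the min cut (the edges out of {s}: 3 + 2)
```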
Interesting read Terry – thanks for sharing.
Now to find the time…
A few times I worked in Hungary where they have special summer schools in mathematics for enthusiastic high school students. I was told that they don’t invite Erdos to these schools anymore because he turns the students into graph theorists rather than mathematicians.
This might be a bit pedantic, but given Marty’s recent string of grammar lessons, I feel empowered to suggest another issue with the question:
“An adjacency matrix is formed”, not “The adjacency matrix is formed”.
Does this not suggest that there are multiple possibilities for said matrix?
Based on that logic, no answer is possible, because the question setter does not indicate that they know which adjacency matrix was formed…
Aha. In the interests of demonstrating that my loathing and detestation of VCAA is based on the mathematics and is nothing personal, I will defend VCAA on this particular score. If you go right back to the top of the blog, you’ll see that the adjacency matrix I constructed is based on the columns (and rows) being labelled in the order J K L M N. There’s nothing special about this order except convenience. Use a different order and you get a different adjacency matrix (that is ‘equivalent’ to mine). However, regardless of the order, the structure of the graph is (obviously) maintained and so there will still be the same number of zeros.
So it does make sense to ask for an adjacency matrix rather than the adjacency matrix.
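John’s point is easy to check mechanically. In this sketch (the matrix is illustrative, not the exam’s) relabelling the vertices permutes rows and columns together, so the number of zeros is invariant:

```python
import itertools
import math

# Hypothetical symmetric adjacency matrix with rows/columns in the
# order J K L M N (a loop at K shows up on the diagonal).
A = [[0, 1, 1, 0, 0],
     [1, 1, 1, 0, 0],
     [1, 1, 0, 1, 0],
     [0, 0, 1, 0, 1],
     [0, 0, 0, 1, 0]]

def zeros(m):
    return sum(row.count(0) for row in m)

n = len(A)
for p in itertools.permutations(range(n)):
    # reorder rows and columns by the same permutation p
    B = [[A[p[i]][p[j]] for j in range(n)] for i in range(n)]
    assert zeros(B) == zeros(A)

print(f"all {math.factorial(n)} vertex orderings give {zeros(A)} zeros")
```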
Fair play to you then.
Gol dang, y’all do a lot of off the beaten track stuff. I mean it’s super cool and all. But I actually never learned that stuff, even later in life. Maybe would impact me if I’m designing a telephone grid or doing research for Google. But it sure doesn’t affect the thermodynamics of a steam cycle.
Again, I’m torn. Not sure if mashing this stuff into earlier grades is just the way to go or if the kids would be better off drilling the classical stuff and doing an elective in graph theory in uni if they need it. Or looking at a Numberphile video on it or the like.
Really, I’m not sure. I can kinda see the appeal of what the bloodsuckers are doing. Just hope that they don’t mess up the kids’ ability to do multistep algebra, so they can handle hydraulic and electrical circuits (not complicated, just need automaticity). I mean nodal analysis for EE actually really involves solving simultaneous linear equations, not graph theory.
But… grr… CAS!? How the eff are they going to follow the solution of the Schroedinger equation for the hydrogen atom if they have to push the “I believe” button instead of following the algebra?
drawn to the sides. Prove that B, E, F, C are concyclic, i.e., lie on the circumference of a circle.
53. AB is a chord of a circle whose centre is C, and DE is the perpendicular let fall on AB from any point D in the circumference. Prove that the angles ADE, BDC are equal.
54. AB is a fixed chord in a circle APQB; PQ another chord of given length. Shew that if AP, BQ meet in R, R will lie on the circumference of a fixed circle for all positions of PQ.
55. AOB, COD are two diameters of the circle ACBD at right angles to each other. Equal lengths OE, OF are taken along OA, OD respectively. Shew that BF produced cuts DE at right angles, and that these two straight lines, when both produced, intercept one-fourth of the circumference.
56. Through P, one of the points of intersection of two circles APB, APC, a straight line BC is drawn at right angles to AP; BA, CA meet the circles again in Q and R. Shew that AP bisects the angle QPR.
57. If two opposite sides of a quadrilateral inscribed in a circle be equal, the other two will be parallel.
58. ABCD is a parallelogram: if a circle through A, B cut AD and BC in E and F respectively, prove that another circle can be drawn through E, F, C, D. Find the position of EF when the two circles are equal.
59. The circumscribing circles of the two triangles spoken of in Book I., Theor. 20, are in all cases equal to each other.
60. The middle points of all chords of a given circle which pass through a fixed point lie on the circumference of a certain circle.
61. Through one common point of two intersecting circles a diameter of each circle is drawn. Shew that the lines joining the other ends of the diameters pass through the other common point.
*62. P is any point on a circular arc APB. Shew that the bisector of the angle APB passes through a fixed point, and that the bisector of its supplementary angle passes through another fixed point.
63. The perpendiculars from A and C to the opposite sides of a triangle ABC intersect in E, and BD is the diameter through B of the circumscribing circle. Prove that AE is equal to CD, and that AC, ED bisect each other.
64. Shew that the circumscribing circles of the equilateral triangles described on the sides of any triangle and external to the triangle meet in a point.
65. Upon the sides of the triangle ABC the equilateral triangles A'BC, B'CA, and C'AB are constructed, the vertices A and A' being on opposite sides of BC, and so of the rest. Prove that the lines AA', BB', CC' meet in a point.
66. Prove that the points determined in Exercises 64 and 65 are the same, and if that point be D, that AA' = BB' = CC' = DA + DB + DC.
67. A circle is described on OA as diameter. Any line through O meets the circumference in P and the perpendicular to OA through A in Q. Shew, by Book II., Ex. 19, that the rectangle OP, OQ is constant. Also enunciate and prove the converses.
68. Squares are described on two sides of a triangle as bases, and on the third side as diagonal. Shew that the three circles about these squares have a point in common.
69. If a hexagon ABCDEF be inscribed in a circle, the sum of the angles A, C, E is equal to that of the angles B, D, F.
Enunciate and prove a similar theorem for any rectilineal figure with an even number of sides inscribed in a circle.
70. Two parallel chords AB and CD are drawn in a circle ABCD, whose centre is O. The chord CD is bisected in F, and a circle is described through A, O, F, cutting the given circle again in G. Prove that B, F, G lie in a straight line.
71. Of all triangles which have a given base and vertical angle, the greatest is that in which the two sides are equal.
72. Two circles intersect at the points A and B. In the circumference of one of the circles ABC any point P is taken, and the straight lines PA, PB (produced when necessary) meet the circumference of the other circle at the points Q, R. Shew that the chord QR will be of the same length whatever may be the position of P.
73. Through one of the points of intersection of two equal circles, each of which passes through the centre of the other, a line is drawn to intersect the circles in two other points. Prove that these points and the other point of intersection of the circles are the angular points of an equilateral triangle.
74. Through each of the points of intersection of two circles straight lines are drawn, the one meeting the circles in points A and B, and the other meeting them in points C and D. Prove that AC is parallel to BD.
75. If two triangles have their bases, areas, and angles at the vertices equal, they are equal in all respects.
THEOR. 20. Of all straight lines passing through a point on the circumference of a circle there is one, and only one, that does not meet the circumference again, and this straight line is perpendicular to the radius to the point.
Let A be a point on the circumference of the circle ABC whose centre is O, DAE the perpendicular through A to the radius OA:
then shall A be the only point in which DE meets the circumference.
Take any point F on DE, and join OF.
Because OA is perpendicular to DE, therefore OF is greater than OA;
therefore F is without the circle ABC; [III. 1, Cor.]
therefore the straight line DE meets the circumference of the circle ABC in the point A only.
Again, let GAH be a straight line through A not perpendicular to OA:
then shall GH meet the circumference in a second point:
Draw OK perpendicular to GH; [I. Prob. 3]
from O draw OL to meet GH at L, making the angle KOL equal to the angle KOA. [I. Prob. 5]
Then, because OL and OA are equally inclined to OK, the perpendicular to GH, therefore OL is equal to OA; that is, OL is equal to the radius of the circle; therefore L is on the circumference; [III. 1, Cor.]
that is, the straight line GH meets the circumference in a second point L.
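The tangency criterion can also be verified in modern coordinates (a sketch added for clarity; it is not part of the original text). Place O at the origin, let A = (r, 0), and let a line through A have direction (cos θ, sin θ), so that its points are (r + t cos θ, t sin θ). Such a point lies on the circle exactly when

(r + t cos θ)² + (t sin θ)² = r², that is, t(t + 2r cos θ) = 0.

Besides t = 0 (the point A itself) there is a second intersection at t = −2r cos θ, which vanishes precisely when cos θ = 0, i.e. when the line is perpendicular to the radius OA; this is Theor. 20 in analytic form.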
The basis of accounting is the accounting equation. It is necessary to understand its meaning in order to identify how transactions impact the accounting equation and the financial position of the entity.
Properties (material, immaterial or monetary) owned by the entity are called assets. All the assets of the entity are financed either by the owners’ means, which are called equity, or by means provided by creditors, called liabilities. These means of finance are claims of the financers against the assets of the entity. Therefore the value of the assets in any entity must be equal to the sum of equity and liabilities, creating the equation:

Assets = Equity + Liabilities
Upon analyzing business transactions it is always required to identify how they impact each part of this equation, i.e. how they impact assets, equity or liabilities.
Examples below provide analysis of basic types of transactions and their impact on the accounting equation.
1st transaction: One shareholder establishes a company which will provide copying services. The shareholder invests 15000$ in cash, opening the entity’s bank account and transferring the cash into it. The impact of this transaction on the equation is provided below: cash (assets) increases by 15000$ and share capital (owners’ equity) increases by 15000$, as the investment is made from the shareholder’s own means.
2nd transaction: the entity acquires copying equipment for 8000$, paying in cash from its bank account. As a result cash decreases by 8000$ and equipment increases by 8000$. Before reflecting the impact of this transaction on the equation, please note that the equation starts from the figures carried over after the previous transaction was reflected, i.e. the opening cash balance is 15000$ and the opening owners’ equity balance is 15000$. This is done after each transaction is reflected. Please pay attention to the fact that in this case owners’ equity does not change, as only the structure of the assets changed, i.e. cash was exchanged for equipment. After reflecting the impact of this transaction on the equation, you can see that the equation still holds, i.e. cash plus equipment is equal to owners’ equity, which is a must. If the equation does not hold, a mistake was made upon reflecting the transaction in the equation.
3rd transaction: inventory is purchased in order to provide services. The price of the inventory is 900$ and the acquisition is made on credit, i.e. the supplier will be paid after 30 days. In this case there is an increase in assets (inventory) of 900$. As the entity does not pay cash at once but remains liable for the inventory, the liabilities of the entity increase, i.e. accounts payable increase by 900$. Again we have an equation.
4th transaction: part of the accounts payable, i.e. 500$, was paid from the bank account to the suppliers of the inventory. In this case the liabilities of the entity decreased by 500$, and since 500$ in cash was paid to the suppliers, assets also decreased by 500$.
5th transaction: services for 6000$ were provided to customers, who paid in cash. In this case the company receives income for the services provided. Income represents an increase in owners’ equity, as it belongs to the owners. Therefore as a result of this transaction owners’ equity increases by 6000$ and assets (cash) increase by 6000$, as the customers pay in cash.
6th transaction: inventory used in providing the services (220$) was expensed, i.e. written off. In this case the company incurred expenses. Contrary to income, which causes an increase in owners’ equity, expenses cause a decrease in owners’ equity. Therefore there is a decrease in inventory of 220$ and a decrease in owners’ equity of 220$. Once again, after reflecting this transaction there is an equation, i.e. assets are equal to the sum of liabilities and owners’ equity.
In order to have a full picture of all the transactions, a summary is provided below. The summary shows the impact of each transaction on the accounting equation, and at the end there is a balance, i.e. an equation between assets and the sum of liabilities and owners’ equity.
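The arithmetic above can also be replayed mechanically. A minimal sketch (illustrative only, not part of the original material) that applies the six transactions and verifies the accounting equation:

```python
# Replay the six transactions above and verify Assets = Equity + Liabilities.
assets = {"cash": 0, "equipment": 0, "inventory": 0}
equity = 0
liabilities = {"accounts_payable": 0}

def check():
    total_assets = sum(assets.values())
    assert total_assets == equity + sum(liabilities.values()), "equation broken!"
    return total_assets

equity += 15000; assets["cash"] += 15000                            # 1: owner invests
assets["cash"] -= 8000; assets["equipment"] += 8000                 # 2: buy equipment
assets["inventory"] += 900; liabilities["accounts_payable"] += 900  # 3: buy on credit
assets["cash"] -= 500; liabilities["accounts_payable"] -= 500       # 4: pay supplier
assets["cash"] += 6000; equity += 6000                              # 5: income in cash
assets["inventory"] -= 220; equity -= 220                           # 6: expense inventory

print("total assets:", check())  # 21180 = liabilities 400 + equity 20780
```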
Throughout the rest of the learning process the company Alfa, described below, will be used. The following transactions occurred in Alfa during June 2007:
1. June 1, 2007: Alfa was established to provide copying services and sell stationery. The shareholder invested own capital in cash into Alfa’s bank account – 20000$ – and took a loan from a bank in the name of Alfa – 15000$
2. June 2, 2007: Alfa acquired fixed assets: equipment – 19000$; and current assets: inventory (for sale) – 5000$. Part of the total acquisition price was paid in cash – 4000$; the remaining part is to be paid after 30 days
3. June 3, 2007: Alfa acquired office supplies for cash – 2500$
4. June 4, 2007: Alfa paid in cash for office insurance, insurance period - one year, insurance price 1500$
5. June 5, 2007: Alfa partly paid the suppliers in cash for the equipment and inventory acquired on June 2, 2007 – 3500$
6. June 15, 2007: Alfa provided copying services to customers – 7000$. 5000$ was received in cash, remaining part to be received after 30 days
7. June 19, 2007: Alfa sold all stationery for cash, sales price – 6500$
8. June 21, 2007: the customers which acquired copying services on June 15, 2007 partly paid their debt – 1000$
Below is an analysis of how these transactions impact the accounting equation.
The 1st transaction results in an increase in cash of 35000$ (the investment of the shareholder plus the loan from the bank), an increase in owners’ equity (share capital) of 20000$ and an increase in liabilities (loan) of 15000$.
The 2nd transaction results in an increase in equipment of 19000$ and in inventory of 5000$, a decrease in cash of 4000$, as part of the acquisition price was paid in cash, and an increase in liabilities (accounts payable) of 20000$ (total acquisition price 19000$ + 5000$ minus 4000$ paid in cash).
The 3rd transaction results in an increase in office supplies (assets) of 2500$ and a decrease in cash of 2500$, as the whole acquisition price of the supplies was paid in cash.
The 4th transaction results in an increase in prepaid expenses (insurance) of 1500$. Prepaid expenses represent expenses which are paid in advance and which will be incurred in the future. An example is insurance, for which Alfa pays in advance for the whole year while the expenses are actually incurred over the course of the year. As Alfa paid for the insurance in cash, there is a decrease in cash of 1500$.
The 5th transaction results in a decrease in cash of 3500$ and a decrease in liabilities (accounts payable) of 3500$, as Alfa paid part of its debt to the suppliers of the equipment and inventory acquired on June 2, 2007.
The 6th transaction results in an increase in owners’ equity of 7000$, as Alfa earned income, which, as mentioned above, belongs to the owners. There is also an increase in cash of 5000$ and in accounts receivable, i.e. debt from customers, of 2000$ (income 7000$ minus cash received 5000$). Debt from customers, or accounts receivable, represents an asset, as it is Alfa’s right to claim cash from the customers for the services provided.
The 7th transaction is more complicated than those described above, as there are two parts:
1. The sale of inventory represents income amounting to 6500$, which results in an increase in cash of 6500$ and an increase in owners’ equity of 6500$.
2. Alfa sold all the inventory which was acquired on June 2, 2007. The cost of this inventory, amounting to 5000$, represents expenses incurred, i.e. the cost of inventory sold. Therefore upon selling this inventory there is a decrease in inventory of 5000$ and a decrease in owners’ equity of 5000$. In total Alfa earned a 1500$ profit on the sale of the inventory, i.e. sales income (6500$) minus cost (5000$).
The 8th transaction results in an increase in cash of 1000$ and a decrease in accounts receivable, i.e. the debt from customers decreases as they paid part of their debt.
And finally there is a summary of all the transactions, indicating that at the end there is an equation between assets and the sum of liabilities and owners’ equity.
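As a cross-check (a tally of the eight transactions above, not part of the original summary): cash ends at 36000$ (35000 - 4000 - 2500 - 1500 - 3500 + 5000 + 6500 + 1000), equipment at 19000$, inventory at 0$, office supplies at 2500$, prepaid insurance at 1500$ and accounts receivable at 1000$, giving total assets of 60000$. On the other side, the loan is 15000$ and accounts payable are 16500$ (20000 - 3500), so liabilities total 31500$, while owners’ equity is 28500$ (20000 + 7000 + 6500 - 5000). Indeed 31500$ + 28500$ = 60000$, so the equation holds.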
NATURAL NEST CHARACTERISTICS OF APIS MELLIFERA JEMENITICA (HYMENOPTERA: APIDAE) AND ITS IMPLICATIONS IN FRAME HIVE ADOPTION.
Apis mellifera jemenitica is the smallest race of A. mellifera in both its body and colony sizes. In the current study we assessed the natural nest volume, worker brood cell dimensions and bee space of the race by measuring their dimensions from naturally built combs in log hives. The optimum box hive volume and surface area requirements were assessed by keeping colonies in frame hives of different volumes, with four replications each, monitored for a period of one year. The average occupied nest volume and comb surface area of the race in log hives were 12.28 ± 5.98 l and 8017.2 ± 3110.60 cm2 respectively, which are significantly smaller than those of other A. mellifera races. The worker brood cell width and depth of the race were 4.07 ± 0.17 mm and 9.39 ± 0.42 mm respectively, and the race builds an average of 262.5 more worker brood cells/dm2 than are built on embossed foundation sheets.
The race maintains an average bee space of 7.27 ± 1.35 mm and naturally builds 30% more combs per unit length than other races. Based on the performances of the colonies, box hives with seven standard frames were found to be the optimum for the race in the region. The study indicates the importance of designing box hives and accessories that match the natural nest volumes and the body and colony sizes of the race, which may contribute to enhancing its productivity.
Key words: A. m. jemenitica, nest volume, bee space, brood cell dimensions.
Apis mellifera jemenitica is autochthonous to a large area of the Arabian Peninsula (Saudi Arabia, Yemen and Oman) and it also occurs over vast areas of Africa in the Sahel climatic zone (Ruttner, 1988; Hepburn and Radloff, 1998). The bees are reported to be the smallest honey bee race of Apis mellifera, overlapping with Apis cerana in many of its morphological values (Ruttner, 1988). Moreover, Al-Ghamdi et al. (2013) reported that in the A. m. jemenitica of the Arabian Peninsula some of the morphological characters related to body size are smaller than those of the African populations. The race is well adapted to the hot and dry environmental conditions of the region, not only because of its smaller body size but also through maintaining small colony sizes.
Honey bee colony nest space, volume and colony size are reported to be important factors in determining wax production, comb construction and subsequent colony performance and survival (Szabo, 1977; Wright, 2003; Hepburn et al., 2014). Moreover, one of the criteria by which honey bees naturally select their nest sites is nest cavity volume (Seeley and Morse, 1976; Schmidt and Hurley, 1995; Villa, 2004).
Under natural conditions within A. mellifera, nest volumes vary greatly from race to race and from ecology to ecology (Prange and Nelson, 2007; Phiancharoen et al., 2011). Moreover, honey bee colonies’ energy requirements, nest defense, labor and homeostasis conditions are known to be the most important factors in determining the upper limit of their nest volume (Prange and Nelson, 2007), indicating that nest volume is an essential element in colony performance and survival.
In this regard Villa (2004) reported a preference for smaller cavities by honey bees in Louisiana, USA. Moreover, unlike for A. mellifera races of the temperate zone, Morse et al. (1993) observed 10.2 - 13.2 l nest boxes naturally occupied by some A. mellifera colonies for over five years. The presence of different nest volume preferences among different races was also well covered by Schmidt and Hurley (1995). In general, tropical honey bee colonies, which do not require food storage for over-wintering survival, are reported to prefer smaller nest volumes (Prange and Nelson, 2007).
The nest volumes of A. mellifera have been estimated to vary between 30 and 60 l (Seeley and Morse, 1976). The African A. m. scutellata is reported to require a relatively smaller area, with an average nest volume of 20 l, only about half that of the European subspecies (Johannsmeier, 1979; Hepburn and Radloff, 1998). Moreover, a 10 - 20 l nest volume was recorded for the natural nest cavities of Africanized bee colonies in Mexico (Ratnieks and Piery, 1991). This generally indicates the importance of carefully considering the natural nest volume of colonies of a given race or region before adopting a box hive of a certain volume. In this regard, Wright (2003) emphasized the importance of maintaining a balance between the volume of the hives used and the population size of colonies, their population dynamics and the amount of stored food in different seasons.
Moreover, Akratanakul (1990) reported that colonies in hive volumes which are not proportional to their size find it difficult to defend against their enemies and to properly control their nest microclimate.
Besides nest volume, bee spaces and the dimensions of brood cells also vary among races. Worker brood cell diameters of A. mellifera races vary greatly; an average of 4.84 mm was recorded for Africanized bees (Piccirillo and De Jong, 2003) and 5.2 mm for European A. mellifera races (Seeley and Morse, 1976). Seeley and Morse (1976) reported that the average depth of A. mellifera worker brood cells is 11 mm; this also varies among races (Phiancharoen et al., 2011). Moreover, the presence of variations in bee space among the different Apis mellifera races is well reported (FAO, 1986; Crane, 1990). Generally, information on nest volume limits, bee spaces and the dimensions of brood cells are important factors in developing and adopting movable frame hives suitable to the biology and ecology of any honey bee race.
However, in many tropical and subtropical countries it is common to directly adopt movable frame hives and accessories that were designed for temperate-evolved races, which might affect the performance of colonies and the acceptance of the technology by beekeepers. Similarly, in the Arabian Peninsula, the types of box hives and accessories used are the ones designed for European races. Beekeepers in the region strongly argue that movable box hives designed for European races may not be suitable to the local bees and conditions. As a result, despite the longstanding and extensive beekeeping practices in the region, the adoption of box hives to manage A. m. jemenitica colonies is very low; indeed more than 70% of the local colonies are still kept in log hives (Al-Ghamdi and Nuru, 2013; Nuru et al., 2014).
The low adoption of box hives in the region could be due to the lack of consideration of the biology and ecology of the local bee race when adopting box hives and their accessories. In this regard, tangible information on natural nest characteristics such as the optimum nest volume, worker brood cell dimensions and bee space of the A. m. jemenitica of the Arabian Peninsula is lacking. Since success in beekeeping is the result of basic knowledge of the biology of the honey bee (Hepburn and Radloff, 1998), it is of paramount importance to understand the nest characteristics of the race. Hence, the aim of the current study was to assess the natural nest volumes, bee spaces and brood cell dimensions of A. m. jemenitica and to compare them with those of other races. Moreover, the study set out to determine the optimum box hive volume requirement of the race and to come up with possible recommendations for designing and adopting movable frame hives specifically suitable for the A. m. jemenitica of the target region.
MATERIALS AND METHODS
Nest volume and comb surface area: Because wild colonies and their nests are not commonly available in the region, the natural nest volumes of A. m. jemenitica were measured from the traditional, cylindrical log hives that are widely used in the region. The average volume of log hives was determined by measuring 180 randomly selected log hives occupied by bees. The average nest volume utilized by colonies was obtained indirectly by measuring the unoccupied parts of the hives, at both the front and rear ends, and deducting these from the whole length of the hive. The average comb surface area of the race was estimated from the surface areas of combs built in 111 different log hives. For each hive the comb surface area was calculated by taking the average comb radius and multiplying the resulting comb area by the number of combs built by the colonies. The numbers of combs were counted after the colonies were transferred from log hives into box hives for other research purposes (Fig. 1).
Number of worker cells per unit area: The average number of worker cells per unit area was determined as the number of worker brood cells/dm2 in built combs. For this, naturally built worker combs were taken from 20 different colonies and, for each colony, comb areas of one dm2, with three replications, were marked and the cells counted. The results were compared with the number of cells built on embossed foundation sheets developed for European honey bee races.
Comb thickness: Brood comb thickness was determined by measuring the thickness of worker brood combs that had been used for brood rearing. Brood combs were obtained from 20 different colonies; for each colony 15 measurements were taken, a total of 300, using a digital caliper.
Worker brood cells depth: Brood combs were obtained from 20 different colonies from which the average depth of worker brood cells was determined. To easily measure the depth, rows of cells in the combs were cross- sectioned using a warm sharp paper knife. For each colony the depth of 25 worker brood cells a total of 500 cells were measured.
Worker brood cell width: The width of worker brood cells (the inner wall-to-wall distance) was determined for 20 different colonies by measuring 25 cross-sectioned cells from each colony, a total of 500 cell width measurements.
Midrib to midrib (comb spacing): The comb spacing of A. m. jemenitica was measured as the midrib-to-midrib distance of two adjacent combs built in log hives. For this, measurements were taken from 10 different colonies, with 10 midrib-to-midrib distances measured for each colony. The average midrib-to-midrib distance was also calculated from the number of combs built and the spaces occupied by the combs over a given length in the log hives. For this determination, a total of 1634 combs, with their occupied lengths, in 111 log hives were used. Finally, the average number of naturally built combs over a 40 cm length in log hives (equivalent to the space used to keep 10 frames in box hives) was counted in 85 hives and compared with the number of frames in box hives.
Bee space: The average natural bee space of A. m. jemenitica was measured as the distance between two adjacent opposite brood combs built in log hives. Accordingly, a total of 10 log hives with well-drawn brood combs were used. For each log hive, 10 bee spaces were measured at different points, a total of 100 measurements.
Box hive volume requirement of the race: In addition to the log hive volume estimation, the box hive volume requirement of the race was assessed by keeping colonies in box hives of different volumes. For this, four levels of movable frame box hives (with 5, 6, 7 and 8 (standard) frames), with four replications for each hive type, were prepared. A. m. jemenitica colonies were then transferred into them and their performances monitored for a period of one year. For each hive type, shallow supers were prepared in case the colonies required extra space. Data on the number of available frames, the number of frames utilized, and the areas occupied by adult bees, brood, pollen and nectar were recorded every 21 days following the protocols of Jeffree (1958), and comparisons were made between the different hive volumes.
Statistical analysis: The data were analyzed using both descriptive statistics and one-way ANOVA procedures to compare means. Mean separation was based on the Tukey-Kramer HSD. For this, JMP-5 statistical software (SAS, 2002) was employed at the 95% (alpha = 0.05) level of significance.
RESULTS

Nest volume and comb surface area: The average volume of traditional log hives was 29.92 ± 4.10 l, with a range of 13.85 - 40.60 l (N = 180) (Table 1). The average occupied volume of A. m. jemenitica colonies in these log hives was only 12.28 ± 5.98 l, with a range of 3.05 - 29.5 l (N = 180), so that the majority (87.22%) of occupied volumes were less than 20 l (Fig. 2). The average number of combs built per colony was 14.72 ± 5.71, with a range of 5 to 28 combs/colony (N = 111) (Table 1). The number of naturally built combs in a 40 cm space in log hives (Fig. 3A) varied from 10.76 to 14.36, with a mean of 13.16 ± 0.74 combs (N = 85). The radius of combs in log hives varied between 6.67 cm and 10.67 cm, with a mean of 9.39 ± 0.58 cm (N = 111).
Considering the average radius (r = 9.39 cm) of the circular combs and the average number of combs in the log hives, the average comb surface area of A. m. jemenitica colonies was calculated to be 8017.2 ± 3110.60 cm2, with a range of 2723.10 - 15249.36 cm2 (N = 111).
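The reported figures can be cross-checked with a short calculation (my reading of the method, stated as an assumption: each comb is treated as a circular disc and both faces are counted):

```python
import math

r = 9.39         # mean comb radius, cm
n_combs = 14.72  # mean number of combs per colony

# total comb area = 2 * pi * r^2 * n_combs (both faces of each disc)
area = 2 * math.pi * r**2 * n_combs
print(f"estimated comb surface area: {area:.0f} cm^2")  # ~8155 cm^2,
# in line with the reported per-hive mean of 8017.2 cm^2 (means of
# products differ from products of means, hence the small gap)

# comb spacing: 13.16 combs fit in the 40 cm that holds 10 frames
print(f"comb spacing: {40 / 13.16:.2f} cm")  # ~3.04 cm, consistent with the
# measured midrib-to-midrib distance of 2.98 +/- 0.26 cm; and 13.16 / 10
# gives ~32% more combs than frames, matching the ~30% in the abstract
```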
Number of worker brood cells/dm2: The average number of naturally built A. m. jemenitica worker brood cells/dm2 (both sides) was 1124.5 ± 54.94, with a range of 1000 - 1228 cells (N = 60). Variation in the number of worker brood cells was not significantly different among colonies (F = 1.267, df = (19,40), p = 0.258).
Cell depth: The average depth of natural worker brood cells was 9.39 ± 0.42 mm, with a range of 8.50 - 10.20 mm (N = 500); the variation among colonies was significantly different (F = 3.23, df = (19,480), p < 0.001).
Table 1. Average, standard deviation, minimum and maximum values of nest volume (in litres), comb surface area and worker brood cell dimensions of sampled colonies.

Variable                         | N   | Mean ± S.D.     | Min.   | Max.     | Values for other races
Volume of log hive (l)           | 180 | 29.92 ± 4.10    | 13.85  | 40.60    | 25a
Occupied volume (l)              | 180 | 12.28 ± 5.98    | 3.05   | 29.52    | 30-60b
Number of combs built/colony     | 111 | 14.72 ± 5.71    | 5      | 28       |
Comb radius (cm)                 | 111 | 9.39 ± 0.58     | 6.67   | 10.67    |
Comb surface area (cm2/colony)   | 111 | 8017.2 ± 3110.6 | 2723.1 | 15249.36 | 6000g, 23400b
Number of combs in 40 cm space   | 85  | 13.17 ± 0.74    | 10.76  | 14.36    | 10
Brood comb thickness (mm)        | 300 | 19.96 ± 0.87    | 17.95  | 21.9     | 21-24b; 25c
Number of worker brood cells/dm2 | 60  | 1124.5 ± 54.94  | 1000   | 1228     | 857d
Worker brood cell depth (mm)     | 500 | 9.39 ± 0.42     | 8.5    | 10.2     | 11b
Worker brood cell width (mm)     | 500 | 4.07 ± 0.17     | 4.0    | 4.8      | 5.15-5.25e
Midrib to midrib distance (cm)   | 100 | 2.98 ± 0.26     | 2.40   | 3.5      | 2.79-3.3f
Bee space (mm)                   | 100 | 7.27 ± 1.35     | 5      | 10       | 8-9
Cell width (diameter): The average width of worker brood cells was 4.07 ± 0.17 mm, with a range of 4.0 - 4.80 mm (N = 500), and the average diameter of worker brood cells varied significantly among colonies (F = 4.85, df = (19,480), p < 0.001).
Worker brood comb thickness: The average thickness of A. m. jemenitica worker brood combs was 19.96 ± 0.87 mm, with a range of 17.95 - 21.9 mm (N = 300). The variation in brood comb thickness was significantly different among colonies (F = 15.61, df = (19,280), p < 0.001).
Worker brood comb midrib-to-midrib distance (comb spacing): The average midrib-to-midrib distance of A. m. jemenitica worker brood combs was 2.98 ± 0.26 cm, with a range of 2.4 - 3.5 cm (N = 100), a result that differed significantly among colonies (F = 4.13, df = (9,90), p < 0.001).
Bee space: The average natural bee space of A. m. jemenitica was 7.27 ± 1.35 mm, with a range of 5 to 10 mm, which is relatively smaller than the bee spaces of other A. mellifera races. Variations were not significantly different among colonies (F = 0.81, df = (9,90), p = 0.61).
Box hive volumes occupied by colonies: Colonies were kept in box hives of different volumes (5, 6, 7 and 8 frames) and their responses are shown in Table 2. The average number of occupied frames, the adult bee and brood populations, and the stored nectar and pollen were observed to increase slightly as the volume of the box hives increased from five to seven frames (Table 2). These values declined as the number of frames increased from seven to eight, although the differences were not statistically significant (Table 2). Generally, colonies kept in 7-frame hives recorded relatively better values than the others. None of the colonies kept in hives of the different volumes required additional supers, even during peak flowering periods.
Table 2. Performances of colonies in box hives of different volumes (Mean ± SE; means in a column followed by different letters differ significantly).

Hive volume  | N  | % of frames utilized vs. available | Average number of frames utilized | Adult bee unit area | Brood unit area | Nectar unit area | Pollen unit area
Five frames  | 37 | 47.3 ± 3.5a                        | 2.8 ± 0.3c                        | 177.3 ± 20.0c       | 57.3 ± 11.6b    | 50.3 ± 8.4b      | 21.7 ± 3.2a
Six frames   | 82 | 47.7 ± 2.4a                        | 3.2 ± 0.2bc                       | 203.7 ± 13.4bc      | 63.1 ± 7.8b     | 60.9 ± 5.6b      | 19.5 ± 2.2a
Seven frames | 53 | 53.9 ± 2.9a                        | 4.3 ± 0.3a                        | 278.3 ± 16.7a       | 103.9 ± 9.7a    | 89.5 ± 7.0a      | 26.8 ± 2.7a
Eight frames | 67 | 45.4 ± 2.6a                        | 4.0 ± 0.2ab                       | 255.3 ± 14.9ab      | 98.1 ± 8.6a     | 74.1 ± 6.2ab     | 23.7 ± 2.4a
P-value      |    | 0.177                              | < 0.001                           | < 0.001             | < 0.001         | < 0.001          | 0.189
The average nest volume (12.28 5.98 l) of A.m. jemenitica of Asian populations was generally much smaller than those volumes (30 - 60 l) reported for other A. mellifera races (Seeley and Morse, 1976). Moreover, the nest volume of the population overlaps with the average nest cavity volume of A. cerana which is usually about 10 - 15 l (Inoue and Adri, 1990; Phiancharoen et al., 2011). This could be due to their less energy requirements to homeostasis their nests since the population exists in warm climatic zone. Moreover, the less volume requirement of the colonies could be associated with the absence of storing a large food resource for their over-wintering as that of European evolved bees. In this regard the upper limit of nest volume has been reported to be influenced by nest defense, labor and homeostasis conditions and the over- wintering survival strategy of a colony (Michener, 1974; Prange and Nelson, 2007).
Indeed, the nest volume of colonies may also be influenced by the quality and quantity of available forage resources in different seasons, and by their population dynamics; the long dry seasons and associated shortages of bee forage may have driven the population to adapt by keeping colony sizes and nest volumes to a minimum, avoiding the risks associated with long dearth periods.
The optimum volume requirement of the A. m. jemenitica population of the study area was also well defined in movable frame hives: colonies kept in hives with more than seven frames showed a declining trend in general performance measures such as adult bee and brood populations and stored nectar and pollen (Table 2). This agrees with the findings of Akratanakul (1990) and Wright (2003), who reported the deleterious effect of managing colonies in hive volumes that are not proportional to the population size of a colony in different seasons. Based on the performances of colonies and the values recorded in this study, the optimum hive for A. m. jemenitica of the target area would be a 7-frame box hive with a 30.5 l volume and 11,900 cm2 of comb surface area, which closely match the average log hive volume (29.92 ± 4.1 l) and average natural comb surface area (8017.2 ± 3110.6 cm2) recorded for the race in the region.
Using hives half to two-thirds the size of Langstroth hives for tropical races has also been recommended previously (Akratanakul, 1990). In this regard, as beekeepers became aware that the sizes of their colonies were steadily declining, they even started to use log hives with smaller volumes, abandoning the larger (> 40 l) log hives that were appropriate for beekeeping some 30 - 40 years ago, when there were more plants with relatively longer flowering periods (Mr. Abdla, pers. comm.). Such trends may indicate a decline in vegetation coverage and possible climate change in the region.
The A. m. jemenitica population of the study area was observed to build relatively small worker brood cells, with an average width of 4.07 ± 0.17 mm and a range of 4.0 - 4.8 mm, smaller than those reported for other A. mellifera races: Africanized bees (4.84 mm) (Piccirillo and De Jong, 2003) and European A. mellifera races (5.2 mm) (Seeley and Morse, 1976).
Interestingly, the average worker brood cell width of A. m. jemenitica overlaps with the worker brood cell diameter of Apis cerana, 4.2 - 4.8 mm (Ruttner, 1988; Inoue and Adri, 1990; Phiancharoen et al., 2011). This indicates that A. m. jemenitica may share many morphological and biological traits with Apis cerana. The overlap of morphological characters between A. m. jemenitica and Apis cerana is well documented (Ruttner, 1988). In addition, the closer phylogenetic relationship of Apis cerana to Apis mellifera, compared with other Apis species, was determined using mitochondrial DNA sequence data (Garnery et al., 1991). Given its zoogeographical position and its overlapping morphological characters, A. m. jemenitica could be a link between the two Apis species, which may strengthen the previous findings.
Moreover, A. m. jemenitica not only utilizes relatively small nest volumes but also constructs small comb surface areas (8017.2 ± 3110.6 cm2) compared with the average worker comb surface area of 23,400 cm2 recorded for feral colonies of European origin (Seeley and Morse, 1976), though relatively more than the surface area (6000 cm2) recorded for A. m. scutellata (McNally and Schneider, 1996). Generally, the average comb surface area of the population was much less than the comb surface area of a standard 10-frame Langstroth brood chamber, which is 10 frames x (42 cm x 19.5 cm) x 2 (both sides) = 16,380 cm2.
However, the density of worker brood cells is relatively high: A. m. jemenitica was observed to build more worker cells (1124.5 ± 54.94 cells/dm2) than reported for African Apis mellifera (1022 cells/dm2) in traditional hives (Hepburn, 1983). Generally, the A. m. jemenitica population naturally builds 262.5 more worker brood cells/dm2 than the number of cells built on embossed foundation sheets designed for other races. This agrees with the finding of Al-Ghamdi (2005), who reported 25% more cells/dm2 for A. m. jemenitica than when the bees were given embossed European wax foundation sheets on which to build comb. For A. m. jemenitica to build a total number of worker brood cells equal to that of 10 frames with embossed wax foundation sheets may therefore require only about 7.67 frames of cells built at their own preferred size.
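As a quick back-of-envelope check of the last figure, and assuming the foundation-sheet density implied by the stated difference (1124.5 − 262.5 = 862 cells/dm2), a few lines of Python reproduce the 7.67-frame estimate:

```python
natural = 1124.5              # worker cells/dm^2 built naturally by A. m. jemenitica
foundation = natural - 262.5  # implied density on embossed foundation sheets (862 cells/dm^2)

# Frames of natural comb needed to hold as many worker cells as 10 foundation frames:
frames_needed = 10 * foundation / natural
print(round(frames_needed, 2))   # 7.67, matching the figure quoted above
```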
Generally, A. m. jemenitica is able to compensate for its relatively small nest volume and comb surface area with a relatively high density of worker brood cells per unit area, indicating that the race is more space-efficient than other Apis mellifera races. Considering the natural worker cell dimensions, and from a practical beekeeping point of view, it would be worthwhile to use casting moulds with a cell size that matches the bees' natural cell size, which would help to rear more brood per unit area efficiently. In this regard, FAO (1986) reported 25% variation in worker cell size among the different races of Apis mellifera and recommended using wax foundation sheets that match the size of the local bees.
Moreover, the average depth of worker brood cells built by A. m. jemenitica (Table 1) was not as deep as that of other A. mellifera races, 11 mm (Seeley and Morse, 1976). In addition, the average bee space of the race (Table 1) was less than the 9 - 10 mm bee space reported for other A. mellifera races. The midrib-to-midrib distance of A. m. jemenitica (2.98 ± 0.26 cm) is also much smaller than the 3.5 cm usually used for European races, and closer to the comb spacing of A. cerana (3.0 cm; Segeren, 2004).
The smaller worker brood cell depth, bee space and midrib-to-midrib distance requirements of A. m. jemenitica are easily seen in practice: within the 40 cm space usually used to hold 10 frames in a standard box hive (Fig. 3B), A. m. jemenitica naturally builds an average of 13 combs in traditional log hives (Fig. 3A).
Although honeybees tolerate a certain degree of bee space variation, directly adopting frames with 3.5 cm comb spacing for A. m. jemenitica does not seem appropriate, and it could be one reason why brace combs are frequently observed between two standard frames. In this regard, a comb spacing of 32 mm with a 7 mm bee space has been recommended for tropical African honeybee races (Adjare, 1990). Moreover, Segeren (2004) noted that the smaller the bee race, the smaller its bee space, comb spacing, cell size and nest volume.
All these observations indicate the importance of considering the natural bee space, comb spacing, hive volume and ecology of the race when using movable frame hives, rather than directly introducing box hives and accessories developed for temperate-evolved races with larger body sizes and better bee forage vegetation with long flowering periods. Overall, the current study suggests the use of 7-frame hives with a comb spacing of less than 32 mm and wax foundation sheets that match the cell size of the local bees. This information and technology may improve productivity, and it may be more readily accepted and disseminated throughout the regions where A. m. jemenitica occurs, contributing to improved beekeeping production and productivity.
Conclusion: The study revealed that A. m. jemenitica requires a relatively smaller nest volume, bee space, comb spacing and brood cell dimensions. However, the race can raise significantly more brood per unit area than other Apis races. Moreover, A. m. jemenitica was observed to maintain a small colony size, which could be an adaptation to cope with the unpredictable and harsh environmental conditions of the region.
Acknowledgments: The authors are grateful to the Deanship of Scientific Research and College of Food and Agricultural Science Research Chair, King Saud University Riyadh, for providing research support.
Adjare, S. O. (1990). Beekeeping in Africa. FAO Agricultural Services Bulletin 68/6. Food and Agriculture Organization of the United Nations, Rome. http://www.fao.org/docrep/t0104e/T0104E00.htm#Contents. Accessed May, 2014.
Al-Ghamdi, A. A. (2005). Comparative study between subspecies of Apis mellifera for egg hatching and sealed brood percentage, brood nest temperature and relative humidity. Pakistan J. Biol. Sci. 8(4):626-630.
Al-Ghamdi, A. A. and A. Nuru (2013). Beekeeping in the Kingdom of Saudi Arabia, past and present practices. Bee World. 90(2):26-29.
Al-Ghamdi, A. A., A. Nuru, M. S. Khanbash and D. R. Smith (2013). Geographical distribution and population variation of Apis mellifera jemenitica (Ruttner). J. Apic. Res. 52(3):124-133.
Akratanakul, P. (1990). Beekeeping in Asia. FAO Agricultural Services Bulletin 68/4. Food and Agriculture Organization of the United Nations, Rome. www.fao.org/docrep /.../x0083e05.ht.. Accessed May, 2014.
Crane, E. (1990). Bees and beekeeping. Science, practice and world resources. Cornell University Press, Ithaca, New York, 614 p.
Erickson, E. H., D. A. Lusby, G. D. Hoffman, E. W. Lusby (1990). Speculations on foundation as a colony management tool. Bee Culture. http://www.beesource.com/point-of-view/ed-dee-lusby. Accessed 8 April 2014.
FAO (1986). A Beekeeping Guide, Tropical and Sub-Tropical Apiculture. FAO Agricultural Services Bulletin 68, FAO, Rome, Italy. 283 p. https://archive.org/.../Tropical_and_Sub-tropical_Apicul. Accessed June, 2014.
Garnery, L., D. Vautrin, J. M. Cornuet, M. Solignac (1991). Phylogenetic relationships in the genus Apis inferred from mitochondrial DNA sequence data. Apidologie. 22:89-97.
Hepburn, H. R. (1983). Comb construction by the African honey bee Apis mellifera adansonii. J. Entomol. Soc. Southern Africa. pp 87-102.
Hepburn, H. R., L. A. Whiffler (1991). Construction defects define pattern and method in comb building by honeybees. Apidologie. 22:381-388.
Hepburn, H. R., S. E. Radloff (1998). Honeybees of Africa. Springer-Verlag, Berlin, 370 p.
Hepburn, H. R., C. W. W. Pirk, O. Duangphakdee (2014). Honeybee Nests. Springer-Verlag, Berlin, 389 p.
Inoue, T., S. S. Adri (1990). Nest site selection and reproductive ecology of the Asian honey bee, Apis cerana indica, in central Sumatra. In: Sakagami SF, Ohgushi R, Roubik DW (eds.) Natural history of social wasps and bees in equatorial Sumatra. Hokkaido University Press, Sapporo, pp 219-232.
Jeffree, E. P. (1958). A shaped wire grid for estimating quantities of brood and pollen in combs. Bee World. 39(5):115-118.
Johannsmeier, M. F. (1979). Termite mounds as nesting sites for colonies of the African honeybee. South African Bee J. 51:9.
McNally, L., S. S. Schneider (1996). Spatial distribution and nesting biology of colonies of the African honeybee Apis mellifera scutellata (Hymenoptera: Apidae) in Botswana, Africa. Environ. Entomol. 25(3):643-652.
Michener, C. D. (1974). The Social Behavior of the Bees: A Comparative Study. Belknap Press of Harvard University Press, Cambridge, 404 p.
Morse, R. A., J. N. Layne, P. K. Visscher, F. Ratnieks (1993). Selection of nest cavity volume and entrance size by honeybees in Florida. Fla. Sci. 56:163-167.
Nuru, A., A. G. Shenkute, A. A. Al-Ghamdi, S. Ismaiel, S. Al-kahtani, T. Yilma, A. Workneh (2014). Socio-economic analysis of beekeeping and determinants of box hive technology adoption in the Kingdom of Saudi Arabia. J. Anim. Plant Sci. 24(6):xx-xx.
Piccirillo, G. A., D. De Jong (2003). The influence of brood comb cell size on the reproductive behavior of the ectoparasitic mite Varroa destructor in Africanized honey bee colonies. Genet. Mol. Res. 2(1):36-42, ISSN 1676-5680. http://www.ncbi.nlm.nih.gov/pubmed/12917800. Accessed December, 2013.
Phiancharoen, M., O. Duangphakdee, H. R. Hepburn (2011). Biology of nesting. In: H. R. Hepburn and S. E. Radloff (eds.) Honeybees of Asia. Springer-Verlag, Berlin, pp. 109-131.
Prange, S., D. H. Nelson (2007). Use of small-volume nest boxes by Apis mellifera L. (European honeybees) in Alabama. Southeast. Nat. 6(2):370-375. DOI: http://dx.doi.org/10.1656/1528-7092(2007)6[370:UOSNBB]2.0.CO;2.
Ratnieks, F. L. W., M. A. Piery (1991). The natural nest and nest density of the Africanized honeybee (Hymenoptera: Apidae) near Tapachula, Chiapas, Mexico. Can. Entomol. 123:353-359.
Ruttner, F. (1988). Biogeography and Taxonomy of Honeybees. Springer-Verlag, Berlin, 284 p.
SAS (2002). SAS Institute Inc., JMP-5 Statistical Software, Version 5. Cary, NC, USA.
Schmidt, J. O., R. Hurley (1995). Selection of nest cavities by Africanized and European honeybees. Apidologie. 26(6):467-475. DOI: http://dx.doi.org/10.1051/apido:19950603.
Seeley, T. D., R. A. Morse (1976). The nest of the honey bee (Apis mellifera L.). Insectes Sociaux. 23(4):495-512.
Segeren, P. (2004). Beekeeping in the Tropics. Agromisa Foundation, Digigrafi, Wageningen, the Netherlands, 90 p. http://www.journeytoforever.org/farm_library/AD32.pdf. Accessed June, 2014.
Szabo, T. I. (1977). Effect of colony size and ambient temperature on comb building and sugar consumption. J. Apic. Res. 16:174-183.
Villa, J. D. (2004). Swarming behavior of the honeybee (Hymenoptera: Apidae) in Southeastern Louisiana. Ann. Entomol. Soc. Am. 97:111-116.
Wright, W. (2003). Survival traits of European honeybees. Bee Culture. 143. wwweap.gov.et/.../Livestock.
Mathematics - Algebra (529 results)
The purpose of this book, as implied in the introduction, is as follows: to obtain a vital, modern scholarly course in introductory mathematics that may serve to give such careful training in quantitative thinking and expression as well-informed citizens of a democracy should possess. It is, of course, not asserted that this ideal has been attained. Our achievements are not the measure of our desires to improve the situation. There is still a very large "safety factor of dead wood" in this text. The material purposes to present such simple and significant principles of algebra, geometry, trigonometry, practical drawing, and statistics, along with a few elementary notions of other mathematical subjects, the whole involving numerous and rigorous applications of arithmetic, as the average man (more accurately the modal man) is likely to remember and to use. There is here an attempt to teach pupils things worth knowing and to discipline them rigorously in things worth doing.

The argument for a thorough reorganization need not be stated here in great detail. But it will be helpful to enumerate some of the major errors of secondary-mathematics instruction in current practice and to indicate briefly how this work attempts to improve the situation. The following serve to illustrate its purpose and program:

1. The conventional first-year algebra course is characterized by excessive formalism; and there is much drill work largely on nonessentials.
A Complete Course in Algebra: For Academies and High Schools
The present work contains a full and complete treatment of the topics usually included in an Elementary Algebra. The author has endeavored to prepare a course sufficiently advanced for the best High Schools and Academies, and at the same time adapted to the requirements of those who are preparing for admission to college.

Particular attention has been given to the selection of examples and problems, a sufficient number of which have been given to afford ample practice in the ordinary processes of Algebra, especially in such as are most likely to be met with in the higher branches of mathematics. Problems of a character too difficult for the average student have been purposely excluded, and great care has been taken to obtain accuracy in the answers.

The author acknowledges his obligations to the elementary text-books of Todhunter and Hamblin Smith, from which much material and many of the examples and problems have been derived. He also desires to express his thanks for the assistance which he has received from experienced teachers, in the way of suggestions of practical value.
This text is prepared to meet the needs of the student who will continue his mathematics as far as the calculus, and is written in the spirit of applied mathematics. This does not imply that algebra for the engineer is a different subject from algebra for the college man or for the secondary student who is prepared to take such a course. In fact, the topics which the engineer must emphasize, such as numerical computations, checks, graphical methods, use of tables, and the solution of specific problems, are among the most vital features of the subject for any student. But important as these topics are, they do not comprise the substance of algebra, which enables it to serve as part of the foundation for future work. Rather they furnish an atmosphere in which that foundation may be well and intelligently laid.

The concise review contained in the first chapter covers the topics which have direct bearing on the work which follows. No attempt is made to repeat all of the definitions of elementary algebra. It is assumed that the student retains a certain residue from his earlier study of the subject.

The quadratic equation is treated with unusual care and thoroughness. This is done not only for the purpose of review, but because a mastery of the theory of this equation is absolutely necessary for effective work in analytic geometry and calculus. Furthermore, a student who is well grounded in this particular is in a position to appreciate the methods and results of the theory of the general equation with a minimum of effort.

The theory of equations forms the keystone of most courses in higher algebra. The chapter on this subject is developed gradually, and yet with pointed directness, in the hope that the processes which students often perform in a perfunctory manner will take on additional life and interest.
A History of Mathematics
Florian Cajori's A History of Mathematics is a seminal work in American mathematics. The book is a summary of the study of mathematics from antiquity through World War I, exploring the evolution of advanced mathematics. As the first history of mathematics published in the United States, it has an important place in the libraries of scholars and universities. A History of Mathematics is a history of mathematics, mathematicians, equations and theories; it is not a textbook, and the early chapters do not demand a thorough understanding of mathematical concepts. The book starts with the use of mathematics in antiquity, including contributions by the Babylonians, Egyptians, Greeks and Romans. The sections on the Greek schools of thought are very readable for anyone who wants to know more about Greek arithmetic and geometry. Cajori explains the advances by Indians and Arabs during the Middle Ages, explaining how those regions were the custodians of mathematics while Europe was in the intellectual dark ages. Many interesting mathematicians and their discoveries and theories are discussed, with the text becoming more technical as it moves through Modern Europe, which encompasses discussion of the Renaissance, Descartes, Newton, Euler, Lagrange and Laplace. The final section of the book covers developments in the late 19th and early 20th centuries. Cajori describes the state of synthetic geometry, analytic geometry, algebra, analytics and applied mathematics. Readers who are not mathematicians can learn much from this book, but the advanced chapters may be easier to understand if one has background in the subject matter. Readers will want to have A History of Mathematics on their bookshelves.
On the Study and Difficulties of Mathematics
Bringing to life the joys and difficulties of mathematics this book is a must read for anyone with a love of puzzles, a head for figures or who is considering further study of mathematics. On the Study and Difficulties of Mathematics is a book written by accomplished mathematician Augustus De Morgan. Now republished by Forgotten Books, De Morgan discusses many different branches of the subject in some detail. He doesn't shy away from complexity but is always entertaining. One purpose of De Morgan's book is to serve as a guide for students of mathematics in selecting the most appropriate course of study as well as to identify the most challenging mental concepts a devoted learner will face. "No person commences the study of mathematics without soon discovering that it is of a very different nature from those to which he has been accustomed," states De Morgan in his introduction. The book is divided into chapters, each of which is devoted to a different mathematical concept. From the elementary rules of arithmetic, to the study of algebra, to geometrical reasoning, De Morgan touches on all of the concepts a math learner must master in order to find success in the field. While a brilliant mathematician in his own right, De Morgan's greatest skill may have been as a teacher. On the Study and Difficulties of Mathematics is a well written treatise that is concise in its explanations but broad in its scope while remaining interesting even for the layman. On the Study and Difficulties of Mathematics is an exceptional book. Serious students of mathematics would be wise to read De Morgan's work and will certainly be better mathematicians for it.
Francis William Newman was an emeritus professor of University College in London and an honorary fellow of Worcester College, Oxford. Considered quite the renaissance man, Newman's interests ranged wildly, from writings on philosophy, English reforms, Arabic, diet, grammar, political economy, Austrian Politics, Roman History, and math. He wrote at length on every subject he found of interest, and this book, Mathematical Tracts, is a testament to his very successful career as a mathematician and his eloquence as an impassioned author. At its core, this book explores many of the basic theorems and principles behind geometry, aimed at the budding mathematician to encourage interest and educate. A wonderful beginner's guide, but also an interesting read for anyone wanting to refresh their foundational knowledge in geometry, this book is an easy to understand and approachable guide to mathematics. After establishing the basics, this book goes in-depth on many geometrical concepts such as the treatment of ratio between incommensurable quantities and primary ideas of the sphere and circle. Newman's vast knowledge of mathematics is put to excellent use in this text, expounding on mathematical concepts and explaining them with such clarity that regardless of prior mathematical knowledge, the reader is guaranteed to understand the concepts. Newman highlights a variety of shapes such as pyramids and cones in their geometric context and explains their mathematical significance. He also expands the reader's understanding of parallel straight lines and the infinite area of a plane angle, and ends the book with a plethora of tables and helpful mathematical examples intended to further clarify the core concepts of the text. Truly one of a kind, Mathematical Tracts is the perfect book for anyone interested in mathematics. Whether you're an early learner or a seasoned professional, you will find new information that is communicated in such a passionate and compelling way that it is impossible not to be enthused and excited about the topic. An incredibly approachable book laden with mathematical concepts that are made both interesting and exciting by the overwhelming passion of the author, this book is highly recommended for all readers.
A Complete Algebra: For High Schools, Academies and Normal Schools
Preface. The object of this work is to give a complete course in all those portions of Algebra which are required in our best High Schools, Academies, and Normal Schools, and at the same time to meet the requirements of students preparing for admission to college. In the preparation of this work the aim has been to make the transition from Arithmetic to Algebra a natural and easy process; to illustrate and discuss each subject with clearness and sufficient fullness; and to so grade the exercises that the beginner will take up each new topic with increased pleasure and profit, and feel that he is both gaining power and mastering the subject. Great care has been taken in the selection of examples neither to make them too difficult, and thus discourage the pupil, nor too easy, and thus deprive him of the power that comes from patient effort. All such problems as merely consume time and do not develop power have been carefully omitted, and yet a sufficient number of well-graded examples will be found under each subject to fix it permanently in the mind of the student.
A Treatise on Refrigerating and Ice-Making Machinery
A Treatise on Refrigerating and Ice-Making Machinery is published and printed by the International Correspondence Schools. This publication is essentially a course packet including a series of informational texts intended to educate the reader on the subject of refrigeration and ice-making machinery at the turn of the twentieth century. With that in mind, the book does not only address mechanics and engineering; it also highlights information on mathematics, chemistry and mechanical drawing. This is a great starter book for anyone interested in the topics of refrigeration and twentieth-century mechanical engineering. The text contains a vast array of mechanical information and includes extra content on pneumatics, heat, steam, and steam engines. The book also contains a series of questionnaires and quizzes (along with the answers) for the reader to test themselves on their knowledge of the subjects presented within the text. This book covers such a bevy of practical information that an individual could feasibly utilize it to learn elementary algebra and trigonometric functions, arithmetic, logarithms, and elementary mechanics. The book also contains a full list of definitions for the topics explored within the text and a complete index for the reader to reference when looking for a specific topic to explore. A Treatise on Refrigerating and Ice-Making Machinery is an engaging and informative exploration of mechanical principles and engineering education. This book is an educational read for anyone interested in elementary studies or more complex engineering and physics topics. This publication establishes a good basic level of understanding that can be expanded upon in books with more advanced content.
An Elementary Treatise on Algebra, for the Use of Students in High Schools and Colleges
The author of this treatise has endeavored to prepare a work which should sufficiently exercise the ability of most learners, without becoming, at the same time, repulsive to them by being excessively abstract. Some writers err in expecting too much, and others err, in an equal degree, by requiring too little of the student. What success has attended an attempt to attain a proper medium it is left for competent teachers to decide.

This work commences in the inductive manner, because that mode is most attractive to beginners. As the learner advances, and acquires strength to grapple with it, he meets with the more rigorous kind of demonstration. This course seems the most natural and effective. Induction is excellent in its place; but when an attempt is made to carry it into all the departments of an exact science, the result often shows that the main object of study was misapprehended. The young frequently fail to deduce clearly the general principle from the particular instances which have engaged their attention.

Several parts of algebra, which are either omitted or not explained with sufficient distinctness in other works, have received particular attention in this. These parts treat of principles and operations, with which students rarely become familiar, but which are essential to a clear comprehension of the subject. Among these operations may be mentioned the separation of quantities into factors, finding the divisors of quantities, and the substitution of numbers in algebraic formulæ.
The orientalists who exploited Indian history and literature about a century ago were not always perfect in their methods of investigation and consequently promulgated many errors. Gradually, however, sounder methods have obtained and we are now able to see the facts in more correct perspective. In particular the early chronology has been largely revised and the revision in some instances has important bearings on the history of mathematics and allied subjects. According to orthodox Hindu tradition the Surya Siddhanta, the most important Indian astronomical work, was composed over two million years ago! Bailly, towards the end of the eighteenth century, considered that Indian astronomy had been founded on accurate observations made thousands of years before the Christian era. Laplace, basing his arguments on figures given by Bailly, considered that some 3,000 years B. C. the Indian astronomers had recorded actual observations of the planets correct to one second; Playfair eloquently supported Bailly's views; Sir William Jones argued that correct observations must have been made at least as early as 1181 B. C.; and so on; but with the researches of Colebrooke, Whitney, Weber, Thibaut, and others more correct views were introduced and it was proved that the records used by Bailly were quite modern and that the actual period of the composition of the original Surya Siddhanta was not earlier than A. D. 400.

It may, indeed, be generally stated that the tendency of the early orientalists was towards antedating, and this tendency is exhibited in discussions connected with two notable works, the Sulvasutras and the Bakhshali arithmetic, the dates of which are not even yet definitely fixed.
Mathematics for Engineers
The Directly-Useful Technical Series requires a few words by way of introduction. Technical books of the past have arranged themselves largely under two sections: the Theoretical and the Practical. Theoretical books have been written more for the training of college students than for the supply of information to men in practice, and have been greatly filled with problems of an academic character. Practical books have often sought the other extreme, omitting the scientific basis upon which all good practice is built, whether discernible or not. The present series is intended to occupy a midway position. The information, the problems and the exercises are to be of a directly-useful character, but must at the same time be wedded to that proper amount of scientific explanation which alone will satisfy the inquiring mind. We shall thus appeal to all technical people throughout the land, either students or those in actual practice.
This small volume contains what remains of the course in Algebra, after matriculation, to the students in the Colleges of Civil Engineering, Mines, and Mechanic Arts in the University of California. It is intended as a continuation of the excellent work on algebra by Mr. John B. Clarke, of the Mathematical Department of the University; and it is thought it will, in connection with Clarke's Algebra, or with any work of similar scope, furnish a good and sufficient preparation for those who intend to pursue the higher mathematics. The constant aim and endeavor throughout has been so to present the various topics discussed as to render them easy of comprehension by the undergraduate student. Wm. T. Welcker. Berkeley, California, July, 1880.
Algebra for Beginners: With Numerous Examples
Isaac Todhunter's Algebra for Beginners: With Numerous Examples is a mathematics textbook intended for the neophyte, an excellent addition to the library of math instructionals for beginners. Todhunter's textbook has been divided into 44 chapters. Early chapters highlight the most basic principles of mathematics, including sections on the principal signs, brackets, addition, subtraction, multiplication, division, and other topics that form the foundation of algebra. Simple equations make up the large majority of the material covered in this textbook. Later chapters do introduce quadratics, as well as other more advanced subjects such as arithmetical progression and scales of notation. It is important to note that Todhunter sticks very much to the basics of algebra. The content of this book lives up to its title, as this is very much mathematics for beginners. The content is provided in an easy to follow manner. This book could thus be used for independent learning as well as by a teacher. A great deal of focus has clearly been given to providing examples. Each concept is accompanied by numerous sample questions, with answers provided in the final chapter of the book. The example questions are every bit as important as the explanations, as one cannot begin to grasp mathematical concepts without having the opportunity to put them into practice. The basics of algebra are explained in an easy to follow manner, and the examples provided are clear and help to expand the knowledge of the learner. If given a chance, Isaac Todhunter's Algebra for Beginners: With Numerous Examples can be a valuable addition to your library of mathematics textbooks.
Lessons on Higher Algebra: With an Appendix on the Nature of Mathematical Reasoning
Mathematics will ever remain the most perfect type of the Deductive Method in general; and the applications of mathematics to the deductive branches of physics furnish the only school in which philosophers can effectually learn the most difficult and important portion of their art, the employment of the laws of simpler phenomena for explaining and predicting those of the more complex. These grounds are quite sufficient for deeming mathematical training an indispensable basis of real scientific education, and regarding (according to the dictum which an old but unauthentic tradition ascribes to Plato) one who is ἀγεωμέτρητος as wanting in one of the most essential qualifications for the successful cultivation of the higher branches of philosophy. John Stuart Mill: System of Logic. I delighted above all in mathematics, because of the certainty and self-evidence of its reasonings; but I did not yet notice its true use and, thinking that it served only the mechanical arts, I was astonished that, its foundations being so firm and solid, nothing loftier had been built upon them. Descartes: Discours sur la Méthode, 1637.
Elementary Algebra for Schools
The present work is an attempt to supply a want which we have long felt ourselves, and which we believe to be shared by many experienced teachers. In setting before a beginner the real and perplexing difficulties of elementary Algebra, there is some fear lest first lessons should degenerate into a mere mechanical manipulation of symbols, uninteresting and uninstructive, because little understood. This well known danger led us to devote special thought to the question of order; to consider, in short, what succession of the various parts of the subject would best illustrate its bearings at an early stage; and we have finally adopted an arrangement, which if it varies somewhat from the common use of elementary text-books is at least based upon the experience of many years, and embodies the result of frequent consultation with our colleagues and other teachers.
Mathematics for Engineers: Including Elementary and Higher Algebra, Mensuration and Graphs, and Plane Trigonometry
The Directly Useful Technical Series requires a few words by way of introduction. Technical books of the past have arranged themselves largely under two sections: the theoretical and the practical. Theoretical books have been written more for the training of college students than for the supply of information to men in practice, and have been greatly filled with problems of an academic character. Practical books have often sought the other extreme, omitting the scientific basis upon which all good practice is built, whether discernible or not. The present series is intended to occupy a midway position. The information, the problems, and the exercises are to be of a directly useful character, but must at the same time be wedded to that proper amount of scientific explanation which alone will satisfy the inquiring mind. We shall thus appeal to all technical people throughout the land, either students or those in actual practice.
This work is designed as a text-book in universities, colleges, and technical schools, the first fifteen chapters being also adapted to use in high schools and academies by students who have some knowledge of elementary algebra.

The demonstrations constitute one of the characteristic features of the book. While most of our text-books on Algebra state with great clearness the theorems and rules, few of them, especially in the earlier parts, give the demonstrations in a way that enables a student to reproduce them. Usually illustration, explanation, and general demonstration are so intermingled that the student is not able to gather up and give in logical form just what constitutes the proof. In this work the plan is that which gives so much definiteness to our teaching in Geometry: each general principle is followed by a concise, logical demonstration, containing only the reasoning necessary to establish it, while all illustrations and explanations by special cases are given in separate articles. The student thus soon learns to know what is demanded in a general proof, and to distinguish between rigorous demonstration and verification or illustration by a special case. Without any loss of conclusiveness in reasoning, the methods employed have permitted, in many cases, much shorter and more easily followed demonstrations than those usually given.

Another characteristic feature is the substitution of short processes for many of the long and tedious ones in common use. As mathematical operations, at best, involve much drudgery, all practical means of shortening the work should be made available to the student. The few short processes given in our text-books are reserved until the student has formed a habit of using the long processes, and, consequently, he never gains a practical use of even these few.
Algebra for Beginners
Algebra for Beginners was written by H. S. Hall in 1900. This is a 205-page book, containing 53,247 words and 2 pictures.
Teacher's Manual for First-Year Mathematics
Teacher's Manual for First-Year Mathematics is a book written by George William Myers, a Professor of the Teaching of Mathematics and Astronomy at the University of Chicago. The book is intended as a teaching manual for teachers instructing their students using a textbook called First Year Mathematics. Myers' book is intended as a companion piece to the textbook First Year Mathematics, released by the same publishing company, The University of Chicago Press. The book makes an effort to assist the teacher by providing them with a detailed how-to regarding teaching the specific problems presented in the textbook. Teacher's Manual is presented in chapters, each corresponding to a chapter in First Year Mathematics. Specific references are made to page numbers and problems presented in the textbook. In total, the book contains fourteen different chapters. Teacher's Manual for First-Year Mathematics can only be used in conjunction with the appropriate textbook. Without access to First Year Mathematics, the book is of no use. It is however an excellent companion piece to the textbook, and those able to access the original textbook will surely find this text to be highly beneficial. While a well-written teacher's manual, George William Myers' book assumes the reader has access to the original textbook. If you are interested in making use of this manual, do ensure that you are also able to access First Year Mathematics.
The Teacher's Hand-Book of Algebra: Containing Methods, Solutions and Exercises
This book - embodying the substance of Lectures at Teachers' Associations - has been prepared at the almost unanimous request of the teachers of Ontario, who have long felt the need of a work to supplement the elementary text-books in common use. The following are some of its special features:

It gives a large number of solutions in illustration of the best methods of algebraic resolution and reduction, some of which are not found in any text-book.

It gives, classified under proper heads and preceded by type-solutions, a great number of exercises, many of them illustrating methods and principles which are unaccountably ignored in elementary Algebras.

It presents these solutions and exercises in such a way that the student not only sees how Algebraic transformations are effected, but also perceives how to form for himself as many additional examples as he may desire.

It shows the student how simple principles with which he is quite familiar, may be applied to the solution of questions which he has thought beyond their reach.

It gives complete explanations and illustrations of important topics which are strangely omitted or barely touched upon in the ordinary books, such as the Principle of Symmetry, Theory of Divisors, Factoring, Applications of Horner's Division, &c.
This tract is intended to give an account of the theory of equations according to the ideas of Galois. The conspicuous merit of this method is that it analyses, so far as exact algebraical processes permit, the set of roots possessed by any given numerical equation. To appreciate it properly it is necessary to bear constantly in mind the difference between equalities in value and identities or equivalences in form; I hope that this has been made sufficiently clear in the text. The method of Abel has not been discussed, because it is neither so clear nor so precise as that of Galois, and the space thus gained has been filled up with examples and illustrations.

More than to any other treatise, I feel indebted to Professor H. Weber's invaluable Algebra, where students who are interested in the arithmetical branch of the subject will find a discussion of various types of equations, which, for lack of space, I have been compelled to omit.

I am obliged to Mr Morris Owen, a student of the University College of North Wales, for helping me by verifying some long calculations which had to be made in connexion with Art. 52.
The American Mathematical Monthly
Nevertheless, it is by no means true that we are without interest in the higher, technical, mathematical field. On the contrary, we have an interest that is far more vital than the mere supplying of technical papers which can be read only by specialists. We believe that large numbers who would become active and effective in higher mathematical research are now lost to the cause simply by reason of the fact that there are no intermediate steps up which they can climb to these heights. We believe that the Monthly has a mission to perform in holding the interest of such persons by providing mathematical literature of a stimulating character that is within their range of comprehension, and by offering an appropriate medium for the publication of worthy papers which the more ambitious among them may produce.

What we have tried to do. Having in mind the principles stated above we have during 1913 supplied 325 pages of matter, exclusive of the index to Volume XX, distributed as follows: papers involving subjects of historical interest, 87 pages; papers involving general information concerning the progress of mathematics, such as meetings of associations, book reviews, notes and news, 57 pages; topics involving pedagogical considerations, especially with regard to subject matter, 37 pages; papers involving a minimum of mathematical technicalities and dealing with topics of wide interest, 56 pages; papers of a somewhat more technical character in which, however, we have tried to have the technical terms explained for the benefit of the general reader, 38 pages; problems proposed and solved and miscellaneous questions involving difficulties actually encountered by our readers, 50 pages. We have thus tried to maintain an appropriate balancing of matter so as to conserve the interests of all our readers.

What we desire to do during the coming year. During 1914 it will be our endeavor to maintain the standards already established and to improve upon the past in every way possible. In order to do this we need the cooperation and constructive criticism of all our friends. For example, a certain reader whose opinion is greatly appreciated thinks that we should have more papers on topics in applied mathematics, and he immediately backs up his opinion by sending us a contribution which will appear in the March issue. That is what we mean by cooperation. The editors have no possible interest in this undertaking which should not appeal directly to every one who is really concerned for the development of mathematics in this country. Their responsibilities and burdens are self-imposed and without emolument, save for the satisfaction which may accrue from aiding in a cause in which they heartily believe. It is their ambition to make the Monthly render genuine service to every teacher of courses in college mathematics in this country, whether in academy, high school, normal school, college, or university; to stimulate to higher endeavor every student of mathematics, whether in school or not, who may be attracted by the papers, problems, questions or discussions published in the Monthly; and to win and hold the cooperation of all who can in any department render assistance in carrying out these plans.
The Principles of Mathematics
The Principles of Mathematics: Vol. 1 is a terrific introduction to the fundamental concepts of mathematics. Although the book's title involves mathematics, it is not a textbook packed with equations and theorems. Instead philosopher Bertrand Russell uses mathematics to explore the structure of logic. Russell's ultimate point is that mathematics is logic and logic itself is truth. The book is substantial and covers all subjects of mathematics. It is divided into seven sections: indefinables in mathematics, number, quantity, order, infinity and continuity, space, matter and motion. Russell covers all the major developments of mathematics and the contributions of important figures to the field. His sharp mind is evident throughout The Principles of Mathematics, as he challenges established rules and teaches readers how to think through difficult problems using logic. Russell was one of the great minds of the 20th Century. In this book he discusses how his ideas were influenced by the logician Peano. He also debates other philosophers and mathematicians, and even anticipates the Theory of Relativity, which had not yet been published by Einstein. One does not need to love mathematics to gain insights from The Principles of Mathematics: Vol. 1. Those who are interested in logic, intellectualism, philosophy or history will find significant insights into logical principles. Readers who desire an intellectual challenge will truly enjoy The Principles of Mathematics: Vol. 1.
The Principles of Mathematics
Bertrand Russell was a British logician, nobleman, historian, social critic, philosopher, and mathematician. Known as one of the founders of analytic philosophy, Russell was considered the premier logician of the 20th century and widely admired and respected for his academic work. In his lifetime, Russell published dozens of books in wildly varying fields: philosophy, politics, logic, science, religion, and psychology, among which The Principles of Mathematics was one of the first published and remains one of the more widely known. Although remembered most prominently as a philosopher, he identified as a mathematician and a logician at heart, admitting in his own biography that his love of mathematics as a child kept him going through some of his darkest moments and gave him the will to live. With his book The Principles of Mathematics, Russell aims to instill in the reader the same deep-seated passion for mathematics and logic that he has carefully cultivated. He adeptly explores mathematical problems in a logical context, and attempts to prove that the study of mathematics holds critical importance to philosophy and philosophers. Russell utilizes the text to explore some of the most fundamental concepts of mathematics, and expounds on how these building blocks can easily be applied to philosophy. In the second part of the book, Russell addresses mathematicians directly, discussing arithmetic and geometry principles through the lens of logic, offering yet another unique and groundbreaking interpretation of a field long before considered static. This book affords new insight and application for many basic mathematical concepts, both in roots of and application to other fields of scholarly pursuit. Russell uses his book to establish a baseline of mathematical understanding and then expands upon that baseline to establish larger and more complex ideas about the world of mathematics and its connections to other fields of personal interest. The Principles of Mathematics is a very captivating glimpse into the logic and rationale of one of history's greatest thinkers. Whether you're a mathematician at heart, a logician, or someone interested in the life and thoughts of Bertrand Russell, this book is for you. With an incredible amount of information on mathematics, philosophy, and logic, this text inspires the reader to learn more and discover the ways in which these very disparate fields can interconnect and create new possibilities at their intersections.
Suppose you have a right triangle ABC, with angle C being the right angle. This means the base of the triangle is a, the height is b and the hypotenuse is c. You can find the length of a side of the right triangle if the lengths of the other 2 sides are given by using the Pythagorean Theorem, which states that a^2 + b^2 = c^2 (a^2 means a raised to the second power, or a squared).
For example, suppose a = 3 and b = 5. To find c, we take 3^2 + 5^2 = c^2. 9 + 25 = c^2, 34 = c^2. By taking the square root of 34, we find that c is approximately 5.83.
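The arithmetic is easy to check in a couple of lines of Python. This is just a sketch; the function name hypotenuse is my own illustrative choice, not something from the article:

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Length of the hypotenuse of a right triangle with legs a and b."""
    return math.sqrt(a**2 + b**2)   # rearranged from a^2 + b^2 = c^2

print(hypotenuse(3, 5))             # 5.8309..., matching the ~5.83 above
```

The standard library also offers math.hypot(a, b), which computes the same quantity.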
If only 1 side of the right triangle is given but we know the measure of angle A or B, we can find the lengths of the other sides using trigonometry. The trigonometric function sine = opposite side divided by hypotenuse. Cosine = adjacent side divided by hypotenuse and tangent = opposite side divided by adjacent side.
For example, suppose in the same right triangle above, we know the measure of angle B = 35 and we know side a = 4. We can find the other sides using the trigonometric functions above. The side opposite angle B is b and the side adjacent to angle B is a with c being the hypotenuse.
Therefore we can use tangent B = opposite/adjacent = b/a.
Tangent 35 = b/4
4(tangent 35) = b.
Using a scientific calculator to solve for tangent 35, we get b = 2.8. We can get c by using cosine B. Recall that cosine = adjacent/hypotenuse.
Therefore cosine B = a/c
Cosine 35 = 4/c
c(Cosine 35) = 4
c = 4/cosine 35.
Using a scientific calculator to solve for cosine 35, we get c = 4.88.
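Putting both steps together, here is a short Python sketch that solves the same right triangle. Note that Python's trig functions work in radians, so the angle is converted first; solve_right_triangle is an illustrative name of mine:

```python
import math

def solve_right_triangle(a: float, B_deg: float):
    """Given leg a and the acute angle B (in degrees) of a right triangle
    with the right angle at C, return the other leg b and the hypotenuse c."""
    B = math.radians(B_deg)     # convert degrees to radians for math.tan/cos
    b = a * math.tan(B)         # tan B = opposite/adjacent = b/a
    c = a / math.cos(B)         # cos B = adjacent/hypotenuse = a/c
    return b, c

b, c = solve_right_triangle(4, 35)
print(round(b, 2), round(c, 2))  # 2.8 4.88, matching the results above
```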
Suppose we have the same triangle ABC, but this time it is not a right triangle. We can still solve for the lengths of the sides of the triangle by using either the Law of Sines or the Law of Cosines, depending on the situation.
If you are given the length of a side and two angles (SAA) or two angles and the side in between them (ASA), you can use the Law of Sines, which states:
If A, B and C are the measures of the angles of a triangle and a, b and c are the lengths of the sides, then SinA/a = SinB/b = SinC/c.
Suppose we know A = 46 degrees, B = 53 degrees and c = 14 inches. We can find a and b as follows:
SinA/a = SinC/c
We know C = 180 – (46 + 53) since the sum of the angles of a triangle equals 180. Therefore, C = 81.
Sin46/a = Sin81/14
14Sin46 = a(Sin81)
a = 14Sin46/Sin81, which is approximately 10.2 inches.
To get b, we use
SinB/b = SinC/c
Sin53/b = Sin81/14
14Sin53 = b(Sin81)
b = 14Sin53/Sin81, which is approximately 11.3 inches.
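The same bookkeeping can be wrapped in a small Python helper for the ASA/SAA case. This is a sketch; solve_asa is an illustrative name:

```python
import math

def solve_asa(A_deg: float, B_deg: float, c: float):
    """Given two angles and side c (ASA/SAA), return angle C and sides a, b
    using the Law of Sines: SinA/a = SinB/b = SinC/c."""
    C_deg = 180 - (A_deg + B_deg)          # angles of a triangle sum to 180
    ratio = c / math.sin(math.radians(C_deg))
    a = ratio * math.sin(math.radians(A_deg))
    b = ratio * math.sin(math.radians(B_deg))
    return C_deg, a, b

C, a, b = solve_asa(46, 53, 14)
print(C, round(a, 1), round(b, 1))         # 81 10.2 11.3, as computed above
```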
If you are given two sides and the angle opposite one of the sides (SSA), then it’s tricky to determine the length of the other side because there might be two triangles, one triangle or no triangle. This is known as the “ambiguous case”. The Law of Sines will determine the number of triangles and give the solution for each.
For example, suppose A = 43, a = 90 and b = 55. We can find angle B by using
SinA/a = SinB/b
Sin43/90 = SinB/55
55Sin43 = 90SinB
SinB = 55Sin43/90
SinB is approximately 0.4167, which gives angles of 25 degrees and 155 degrees. If B is 155 degrees, then we already have more than 180 degrees when adding this to angle A (155 + 43 = 198). The only possibility then is for B to be 25 degrees. Therefore, angle C is 180 – (25 + 43) = 112. Now we can get c by using
SinA/a = SinC/c
Sin43/90 = Sin112/c
c(Sin43) = 90Sin112
c = 90Sin112/Sin43, which is approximately 122.
Here’s an example where the Law of Sines will show there is no solution.
Suppose A = 70 degrees, a = 55 and b = 74.
SinA/a = SinB/b
Sin70/55 = SinB/74
74Sin70 = 55SinB
SinB = 74Sin70/55
SinB = 1.26
The sine of an angle can never exceed 1, therefore there is no angle B for which SinB = 1.26. The triangle with the given measurements in the problem does not exist.
A third case is when there are two possible triangles. Suppose in triangle ABC, A = 38 degrees, a = 57 and b = 60. Find angle B using SinA/a = SinB/b
Sin38/57 = SinB/60
60Sin38 = 57(SinB)
SinB = 60Sin38/57
SinB = 0.648
There are two possible angles for B for which SinB = 0.648: 40 degrees and 140 degrees. If you add either angle to angle A, you will not exceed 180 degrees. Therefore there are two triangles possible. In the first triangle, angle C = 180 – (40 + 38) = 102 and in the second triangle angle C = 180 – (140 + 38) = 2.
In the first possible triangle, we can solve for c using SinA/a = SinC/c
Sin38/57 = Sin102/c
cSin38 = 57Sin102
c = 57Sin102/Sin38, which is approximately 90.6.
In the second possible triangle,
Sin38/57 = Sin2/c
cSin38 = 57Sin2
c = 57Sin2/Sin38, which is approximately 3.2.
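All three SSA outcomes (no triangle, one triangle, or two) can be handled by one small Python routine. This is a sketch with an illustrative name, solve_ssa; because it keeps full precision instead of rounding angles to whole degrees as the examples above do, its answer for the very thin second triangle differs slightly (C ≈ 2.4 and c ≈ 3.9 rather than 2 and 3.2):

```python
import math

def solve_ssa(A_deg: float, a: float, b: float):
    """Given angle A, its opposite side a, and side b (SSA), return a list
    of (B, C, c) solutions: the list may hold zero, one or two triangles."""
    sinB = b * math.sin(math.radians(A_deg)) / a     # from SinA/a = SinB/b
    if sinB > 1:
        return []                                    # sine can't exceed 1: no triangle
    solutions = []
    B_acute = math.degrees(math.asin(sinB))
    for B_deg in (B_acute, 180 - B_acute):           # the two candidate angles
        C_deg = 180 - A_deg - B_deg
        if C_deg > 0:                                # reject if angles exceed 180
            c = a * math.sin(math.radians(C_deg)) / math.sin(math.radians(A_deg))
            solutions.append((B_deg, C_deg, c))
    return solutions

print(solve_ssa(43, 90, 55))   # one triangle: B ~ 25, C ~ 112, c ~ 122
print(solve_ssa(70, 55, 74))   # []: no such triangle
print(solve_ssa(38, 57, 60))   # two triangles: c ~ 90.7 and c ~ 3.9
```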
The Law of Cosines is used when given two sides and the included angle (SAS) or given the lengths of all 3 sides. Since this article is about finding the lengths of the sides, we can exclude SSS and focus on SAS.
The Law of Cosines states that if A, B and C are the measures of the angles of a triangle and a, b and c are the sides opposite those angles, then a^2 = b^2 + c^2 – 2bcCosA, b^2 = a^2 + c^2 – 2acCosB, c^2 = a^2 + b^2 – 2abCosC.
Suppose we have triangle ABC and we are given A = 65 degrees, b = 18 and c = 27. We can find “a” by using the Law of Cosines.
a^2 = b^2 + c^2 – 2bcCosA
a^2 = 18^2 + 27^2 – 2(18)(27)Cos65
a^2 = 324 + 729 – 972Cos65
a^2 = 1053 – 411
a^2 = 642
a is approximately 25.3.
It’s easy to determine which of the 3 versions of the Law of Cosines to use. If you are solving for side a, use a^2 = b^2 + c^2 – 2bcCosA. If you are solving for side b, use b^2 = a^2 + c^2 – 2acCosB and if solving for side c, use c^2 = a^2 + b^2 – 2abCosC.
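For completeness, the SAS case reduces to a one-line function in Python. Again a sketch; side_from_sas is my illustrative name:

```python
import math

def side_from_sas(b: float, c: float, A_deg: float) -> float:
    """Side opposite angle A, given the other two sides and the included
    angle (SAS): a^2 = b^2 + c^2 - 2bc*CosA."""
    return math.sqrt(b**2 + c**2 - 2 * b * c * math.cos(math.radians(A_deg)))

print(round(side_from_sas(18, 27, 65), 1))   # 25.3, matching the example above
```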
I hope this article showing how to find the sides of a triangle using the Pythagorean Theorem, Law of Sines and Law of Cosines is helpful.
Models of self-propelled particles (SPPs) are an indispensable tool to investigate collective animal behaviour. Originally, SPP models were proposed with metric interactions, where each individual coordinates with neighbours within a fixed metric radius. However, recent experiments on bird flocks indicate that interactions are topological: each individual interacts with a fixed number of neighbours, irrespective of their distance. It has been argued that topological interactions are more robust than metric ones against external perturbations, a significant evolutionary advantage for systems under constant predatory pressure. Here, we test this hypothesis by comparing the stability of metric versus topological SPP models in three dimensions. We show that topological models are more stable than metric ones. We also show that a significantly better stability is achieved when neighbours are selected according to a spatially balanced topological rule, namely when interacting neighbours are evenly distributed in angle around the focal individual. Finally, we find that the minimal number of interacting neighbours needed to achieve fully stable cohesion in a spatially balanced model is compatible with the value observed in field experiments on starling flocks.
One of the prominent features of collective animal behaviour is the way animal groups manage to stay together in spite of predatory attacks and environmental perturbations. During the aerial display of starlings, for example, flocks fly for almost an hour above the roost, in the presence of falcons and seagulls exerting continuous disturbances. In this respect, flocks exhibit a very efficient response, with a large degree of coordination and very robust cohesion. Flocks are an emblematic case of self-organized collective behaviour, where global patterns emerge from local interaction between individuals [2,3]. A crucial question is therefore: what kind of interactions are able to grant the group the robustness to perturbations that we observe?
Experimental results on flocks of starlings (Sturnus vulgaris) [4,5] gave some insight into the nature of the interaction between birds. In the study of Ballerini et al., it was discovered that interactions are topological, each individual coordinating with a fixed number (approx. 7) of closest neighbours, irrespective of their distances. This result contrasted with what was assumed by most models of self-organized collective motion, where metric interactions were used [6–12].
In the study of Ballerini et al., it was argued that topological interactions grant more robust cohesion than metric ones, and are therefore more effective from an anti-predatory point of view. In the present work, we test this hypothesis. To this end, we resort to numerical models of self-propelled particles (SPPs), which have been extensively used in the last 20 years to study the emergence of order in polarized systems. Most of the past literature on flocking models was devoted either to characterizing the onset of ordering [9,13–18] or to describing the features of the ordered phase [6–8,10,19–23]. Less attention was given to response and robustness to external perturbations, and to understanding what determines specific traits of the global behaviour at a microscopic level. Besides, most of the numerical analysis has been performed in two dimensions, dealing either with small finite groups or with fluids of SPPs.
However, to really understand what happens in real aggregations, we need to consider three-dimensional models and look at large finite groups of individuals. A few very recent works [24–28] implemented topological rules both in two- and three-dimensional models, but did not consider the question we want to address in this paper: what are the features of the microscopic interactions that grant robust cohesion to the group?
In the following, we focus on this issue. We consider a class of numerical models of SPPs in three dimensions with different kinds of interactions, both metric and topological. We compare the robustness of group cohesion under the effects of noise and of external perturbations. Our analysis suggests that topological interactions perform better than metric ones, and that to achieve maximal stability, the topological interactions must be spatially balanced, distributing interacting neighbours evenly around each individual.
2. Summary of experimental results
The first empirically based results on the nature of the inter-individual interactions in flocks of birds were obtained in the last couple of years thanks to novel experimental and algorithmic techniques [29,30]. Stereoscopic experiments were performed in the field on flocks of starlings during aerial display above the roost. Three-dimensional positions and velocities of individual birds were reconstructed for flocks of up to a few thousand individuals.
2.1. Topological interaction
A first statistical analysis focused on positions, quantifying how individuals in a flock are mutually positioned in space. It was discovered that the distribution of neighbours of a given bird is strongly anisotropic, closest neighbours being located significantly more on the sides than along the direction of motion. Using the degree of this anisotropy as a proxy of the interaction between individuals, it was discovered that interactions have a topological nature (figure 1): when considering flocks of different densities, the number of interacting neighbours does not display any dependence on the density. On the contrary, their metric distance depends strongly on the density; more precisely, the metric radius of interaction increases linearly with the mean nearest-neighbour distance (called ‘sparseness’ in previous works; figure 1). This behaviour indicates that birds in a flock always interact with the same number of neighbours, independently of their distances. Further analysis of individual velocities, using methods of statistical inference based on the maximum entropy approach, fully confirmed this conclusion (same figure). An estimate of the topological interaction range based on 22 flocking events [4,31] indicates that each bird interacts approximately with the 7±1.5 closest neighbours.
Despite the fact that interactions are local, flocks are able to achieve strong coherence on a large scale. Correlations both in orientations and in speed are scale-free, implying that the range of influence of each individual is much larger than the interaction range, and it extends over the whole flock. In other terms, even if each bird only interacts with the seven closest neighbours, its change of behaviour can influence even the furthest individuals. The mechanism through which this information propagation occurs, from interaction link to interaction link through the whole network, has been elucidated in Bialek et al., where it was shown that mutual local alignment interactions between neighbours are sufficient to produce the velocity correlations that have been measured in natural flocks.
Local interactions with few neighbours are economic, and at the same time grant coherence at large scale. But why are interactions topological? What is the benefit of coordinating with the same number of individuals independently of their distance? And why is the number of interacting neighbours close to seven? First of all, we may note that estimating metric distances may be too costly for birds, especially during real-time interaction. On the other hand, even topological interaction involves measuring distances, as the first nc neighbours are ranked in distance. Some discussion of these issues was provided in Ballerini et al., where it was shown that topological interactions appear to be more robust than metric interactions in terms of cohesion of the group. Using some simple two-dimensional models of collective motion, two flocks, one with metric and the other with topological interactions, were prepared with the same initial conditions and then exposed to a predatory attack (modelled as a repulsive central force; see Ballerini et al.). The topological flock exhibited a much stronger cohesion, giving rise to a lower number of subgroups and stragglers after the attack (figure 2).
These results suggest that topological interactions enhance robustness in cohesion, a crucial feature in the anti-predatory response of animal aggregations. To investigate this point further, however, a more systematic analysis is required. The numerical simulations described in Ballerini et al. were performed in two dimensions and with a given set of parameters (in particular the number of interacting neighbours and the noise strength). Moreover, the model used in the study of Ballerini et al. did not have an attraction term in the equation, so that cohesion was somewhat difficult to assess. Natural flocks live in three dimensions, where cohesion is much more difficult to maintain owing to the larger number of degrees of freedom. Even if one expects topological interactions to outperform metric ones also in three dimensions, it is not a priori evident how strong this advantage is in terms of robustness. Moreover, it is not clear what the role of the number of interacting neighbours is, and whether robust cohesive groups can be produced even with very small numbers of interacting neighbours. The remaining part of this paper is dedicated to investigating these questions. We will generalize the numerical analysis performed in Ballerini et al. to a three-dimensional model with attraction, and systematically check for robustness in cohesion in both metric and topological models of self-organized collective motion.
2.2. Pair radial correlation function
Given the non-trivial mutual arrangements of individuals, with a strong anisotropy in the angular distribution of neighbours, one might wonder whether mutual distances also obey some specific non-trivial distribution. There are many examples of birds that fly in formation, with regular distances between neighbours. This is not the case for starlings. In this respect, in contrast, flocks are rather structureless systems, with individuals continuously exchanging positions and with a distribution of mutual distances lacking any structure. A good quantitative observable to pinpoint this behaviour is the so-called radial pair correlation function g(r), defined as the density of particles at distance r from a focal particle. More precisely, this function is defined in the following way:

$$g(r) = \frac{1}{4\pi r^2\,\rho N}\sum_{i}\sum_{j\neq i}\delta(r - r_{ij}),\tag{2.1}$$

where δ(x) is Dirac's delta function, ρ is the mean density and rij is the distance between particles i and j. From the practical point of view, in order to compute the numerator in g(r), one counts how many pairs of points exist with mutual distance rij between r and r + dr, where dr is an arbitrary binning interval (see the original reference for the details of the definition, and in particular for the crucial point of how to deal with the border).
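In code, the naive estimator of equation (2.1) is a histogram of pairwise distances normalized by the shell volumes. The Python sketch below deliberately ignores the border corrections mentioned above, which do matter for finite flocks, so it is an illustration rather than a faithful implementation:

```python
import numpy as np

def radial_g(points, dr, r_max, volume):
    """Naive radial pair correlation g(r) for an N x 3 array of positions.
    Border effects (crucial for finite flocks, see text) are ignored."""
    n = len(points)
    rho = n / volume                                  # mean number density
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    rij = dists[np.triu_indices(n, k=1)]              # each pair once
    edges = np.arange(0.0, r_max + dr, dr)
    counts, _ = np.histogram(rij, bins=edges)
    r = 0.5 * (edges[:-1] + edges[1:])                # bin centres
    shell_vol = 4.0 * np.pi * r**2 * dr               # volume of each shell
    # factor 2: the histogram holds each pair once, g(r) counts ordered pairs
    return r, 2.0 * counts / (n * rho * shell_vol)
```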
The radial correlation function g(r) is a very useful tool for distinguishing different phases of matter, be it standard physical matter or active matter. In a crystal, g(r) has very sharp and pronounced peaks; in the liquid phase, g(r) has many smoother, but well-defined peaks, corresponding to the shells around each particle. On the other hand, in a gas the g(r) is rather structureless, only showing a drop at small r corresponding to the hard core of the particles that cannot get too close to each other.
The form of g(r) in real flocks of starlings is shown in figure 3. We clearly see that there is not much structure, very much like what one finds in a gas. This lack of structure may seem a rather unexciting result, but in fact it is important. As we shall see, we will use g(r) to fix the parameters of our simulations. This experimental constraint is not an easy one to match: it is very easy to get the radial correlation function wrong. Indeed, a possible way to enforce cohesion in a system of interacting particles is to introduce strong attraction between them. However, this leads to strong structure, with crystalline or liquid-like g(r). This is not what real flocks do. Therefore, it seems that a ‘boring’ radial correlation function is a very non-trivial biological requirement that models have to match. Our aim in the present work was to investigate those features of the interaction that enhance cohesion and at the same time produce flocks with as featureless a radial correlation function as the natural ones.
3. Numerical models of self-organized collective behaviour
Many models of collective motion have been investigated in the last 10 years, by biologists [6–8,10,19], physicists [9,11,13,14,16–18,20,22] and control theorists [35,36]. Here, we focus on a class of such models, known as SPP models [9,22,24]. The first and most renowned among them is the Vicsek model, where point particles with constant speed move based on mutual alignment with neighbours and subject to noise. If we characterize each particle by its position xi and velocity vi, then the dynamical equations of motion read as

$$\mathbf{v}_i(t+1) = v_0\,\Theta\Big[\sum_{j\in S(i)}\mathbf{v}_j(t) + N_i\,\eta\,\boldsymbol{\xi}_i(t)\Big],\qquad \mathbf{x}_i(t+1) = \mathbf{x}_i(t) + \mathbf{v}_i(t+1).\tag{3.1}$$

Here Θ is the normalization operator, Θ(y) = y/|y|, v0 is the (constant) speed of the particles and ξi(t) is a random vector, delta correlated in particle index and time and uniformly distributed on the unit spherical surface. The parameter η tunes the amount of noise the particles are subject to. Finally, S(i) indicates the ensemble of the Ni interacting neighbours of particle i. In the original Vicsek model, S(i) is chosen following a metric rule: particle i interacts with all particles closer than a given metric interaction range rc.
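As an illustration, a single update step of equation (3.1) with metric neighbours can be written in a few lines of Python. This is a didactic sketch, not the simulation code used in this work, and the vectorial form of the noise follows the reading of the equation given above:

```python
import numpy as np

def vicsek_step(x, v, rc, eta, v0, rng):
    """One update of the metric Vicsek model: each particle aligns with all
    neighbours within radius rc, subject to vectorial noise of strength eta.
    x, v are N x 3 arrays; rng = np.random.default_rng()."""
    n = len(x)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    adj = d < rc                                  # neighbour sets (include self)
    v_new = np.empty_like(v)
    for i in range(n):
        s = adj[i]
        ni = s.sum()                              # number of neighbours N_i
        xi = rng.normal(size=3)
        xi /= np.linalg.norm(xi)                  # random unit vector
        w = v[s].sum(axis=0) + ni * eta * xi      # alignment plus noise
        v_new[i] = v0 * w / np.linalg.norm(w)     # normalization operator
    return x + v_new, v_new                       # position and velocity update
```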
Despite its minimal architecture, the Vicsek model exhibits non-trivial collective properties and gives rise, for low enough noise, to a polarized flow of moving particles. Still, when the volume available to the flockers increases (at fixed number of particles), even very small fluctuations can lead to flock dispersion and an initially ordered group soon dissolves in open space. In other terms, the Vicsek model is not able to produce finite polarized and cohesive aggregations of particles.
To overcome this problem, Grégoire et al. introduced an extension of the Vicsek model, where an attraction–repulsion term is added to fix the density of the aggregation. The velocity updating is modified as

$$\mathbf{v}_i(t+1) = v_0\,\Theta\Big[\alpha\sum_{j\in S(i)}\mathbf{v}_j(t) + \beta\sum_{j\in S(i)}\mathbf{f}_{ij} + N_i\,\eta\,\boldsymbol{\xi}_i(t)\Big],\tag{3.2}$$

where the parameters α and β tune the mutual relevance of alignment and attraction/repulsion, and the attraction–repulsion force, acting along the unit vector eij joining particles i and j, is given by

$$\mathbf{f}_{ij} = f(r_{ij})\,\mathbf{e}_{ij},\qquad f(r) = \begin{cases}-\infty, & r < r_b,\\[2pt] \tfrac{1}{4}\,(r - r_e)/(r_a - r_e), & r_b \le r \le r_a,\\[2pt] 1, & r_a < r < r_c.\end{cases}\tag{3.3}$$

Here, rb corresponds to the hard-core distance, re to the ideal equilibrium distance between particles, rc is the maximum interaction range and ra defines the distance beyond which f becomes constant. The set of interacting neighbours S(i) is now defined as the set of Voronoi neighbours of bird i that are found within the range rc from bird i. In the study of Grégoire et al., the authors fixed v0 = 0.05 together with the length scales rb, re and ra (we will use the same values for these parameters throughout this paper).
A systematic analysis of the collective behaviour generated by equation (3.2) in two dimensions, together with a phase diagram, can be found in Grégoire et al. Contrary to the original Vicsek model, this model produces, for appropriate values of the parameters α, β and η, cohesive groups also in open space (i.e. in the zero density limit). It therefore seems a good candidate to investigate stability and robustness in group cohesion. There are however a few issues to be further considered. We note that, according to the previous definition, model (3.2) has a mixed metric–topological character. Indeed, when the maximum interaction radius rc is large enough with respect to the average distance between particles, the ensemble of interacting neighbours coincides with the first Voronoi shell, which is defined independently of mutual distances, i.e. topologically. When rc decreases, however, actual distances start to be relevant and interactions become truly metric. Besides, even when rc is large, the number of interacting neighbours is uniquely determined by the Voronoi tessellation (approx. 15 in three dimensions) and cannot be tuned.1 Since we want to investigate the link between the microscopic interactions (the number and positions of interacting neighbours) and the degree of cohesion, we would like a model where we can switch in a neat way from metric to topological interactions and where we can tune the number of interacting neighbours. Besides, keeping in mind natural flocks, we want to focus on particles moving in three dimensions, rather than in two dimensions.
For all these reasons, we have generalized the model of equation (3.2). We have considered this model in three dimensions and have introduced three variants, where the ensemble of interacting neighbours S(i) is chosen with a different set of rules.
— Metric interactions: in this case, as in the original Vicsek model, the set S(i) consists of all the neighbours of bird i that are found within a given metric range rc around i.
— Simple topological interactions: here the set S(i) consists of the first nc nearest neighbours of bird i, irrespective of their distances. The attraction–repulsion force has the same form as in equation (3.3), but we set rc = ∞ in equation (3.3) so that the force is applied to all topologically selected neighbours in S(i).
— Topological interactions with angular resolution: here interacting neighbours are chosen irrespective of their distances, but one requires that a minimal angular resolution μ exists between distinct neighbours in S(i). Given bird i, when two (or more) of its neighbours fall within the same solid angle of width μ, only one of them is included in S(i) (figure 4). Note that fixing μ also fixes the average number of interacting neighbours nc: small values of nc correspond to large values of μ and vice versa. However, compared with the simple topological case, neighbours are now chosen in a balanced way. For example, if μ is very large (of order π), typically only two neighbours are included in S(i); owing to the angular constraint, they are bound to lie on opposite sides of bird i, while for simple topological interactions they can be arbitrarily close in angle. (All three selection rules are sketched in code below.)
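For concreteness, the three selection rules can be written out as follows. The paper does not prescribe an implementation; in particular, for the balanced rule we assume a greedy scheme that scans neighbours in order of distance and rejects any candidate closer in angle than μ to a neighbour already selected:

```python
import numpy as np

def metric_neighbours(x, i, rc):
    """Rule 1: all particles within metric radius rc of particle i."""
    d = np.linalg.norm(x - x[i], axis=1)
    return [j for j in range(len(x)) if j != i and d[j] < rc]

def topological_neighbours(x, i, nc):
    """Rule 2: the nc nearest neighbours of i, whatever their distance."""
    d = np.linalg.norm(x - x[i], axis=1)
    return [j for j in np.argsort(d) if j != i][:nc]

def balanced_neighbours(x, i, mu):
    """Rule 3 (assumed greedy scheme): distance-ranked neighbours, accepted
    only if at least an angle mu (radians) away from every accepted one."""
    d = np.linalg.norm(x - x[i], axis=1)
    chosen = []
    for j in np.argsort(d):
        if j == i:
            continue
        u = (x[j] - x[i]) / d[j]                 # direction to candidate j
        ok = True
        for k in chosen:
            w = (x[k] - x[i]) / d[k]             # direction to accepted k
            if np.arccos(np.clip(u @ w, -1.0, 1.0)) < mu:
                ok = False                       # same angular cone: skip it
                break
        if ok:
            chosen.append(j)
    return chosen
```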
In the following, we will study numerically these three variants of the model and investigate their resilience to noise and perturbations. All numerical simulations were performed on a graphics processing unit using Compute Unified Device Architecture.
4. Stability analysis
Our primary objective is to study the stability, or resilience, of a flock against (i) noise and (ii) external perturbations, and to compare the three models introduced in the previous section according to such a stability analysis. In order to make a comparison, though, we need to fix the parameters α, β and η. Moreover, we need to fix the parameter defining the interaction range, namely rc (metric), nc (topological) and μ (balanced). How can these parameters be fixed?
When comparing the resilience of different models to noise, η, we must of course use the same value of η in all three models, otherwise the comparison would be unfair. In this case, then, we must also use the same values for all parameters other than the noise. Concerning the range, this means fixing rc and μ such that the effective number of interacting neighbours, nc, is the same in all three models. This ‘equal parameters’ comparison is a neutral (and natural) path that we certainly must investigate.
However, using equal parameters is not the right thing to do when we test the stability against external perturbations. The three models are different, and therefore parameters with the same values may imply different biological observables. Hence, the second criterion we adopt is to use for each model a different set of parameters (a sort of optimal set) that ensures realistic values of polarization and cohesion, and as realistic as possible a radial correlation function, g(r). Once this calibration to the biological observables is done, we will proceed with the comparison of the models' performance against external perturbation.
4.1. Stability against noise
Let us start by studying stability against noise in the equal parameters approach. We select the parameters in such a way as to have an initially cohesive and moving flock (see figure 5 for the parameters' values). To assess stability, we let the system evolve following equation (3.2) and check the degree of cohesion after a given, large, number of simulation steps. Of course, how large this time T is, is rather arbitrary. However, given that we are adopting an equal parameters comparative approach, the important point is that this time be the same for all three models.
As a measure of cohesion, we use the number of connected components (CCs) into which the initially cohesive flock spontaneously splits after time T. A CC is defined as a group of particles in which each particle lies within a given threshold distance of another particle of the group. A large number of CCs implies low stability, and vice versa. Each experiment is repeated 400 times and averages are performed over all these runs.
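For concreteness, the CC count can be computed by a breadth-first search on the proximity graph, as in the following sketch (the threshold and array layout are illustrative; this is not the code used for the simulations):

```python
import numpy as np
from collections import deque

def count_components(x, threshold):
    """Number of connected components: particles i, j are linked when their
    distance is below the threshold; components are collected by BFS."""
    n = len(x)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    adj = d < threshold
    seen, n_cc = np.zeros(n, dtype=bool), 0
    for start in range(n):
        if seen[start]:
            continue
        n_cc += 1                               # new component found
        queue = deque([start])
        seen[start] = True
        while queue:
            u = queue.popleft()
            for w in np.flatnonzero(adj[u] & ~seen):
                seen[w] = True
                queue.append(w)
    return n_cc
```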
Results are summarized in figure 5, where the histogram of the number of CCs is displayed. As we can see from this figure, both topological models are more stable than the metric one, giving rise to a smaller number of sub-groups. However, we note that the simple topological model, despite performing better than the metric one, exhibits limited stability in terms of cohesion. Indeed, the mere presence of noise is by itself sufficient to break an initially cohesive group into independent components in the long run. By contrast, the balanced topological model appears more robust, keeping cohesion and not breaking the group even after a very long time.
4.2. Stability against external perturbation
One possible objection at this point is that comparing the models at the same values of the parameters penalizes some models too much with respect to others. For example, it is well known that metric models can produce cohesive unperturbed flocks. Hence, the result of figure 5, where the metric model loses cohesion with no external perturbation, may seem odd, or may be due to an unreasonable choice of the parameters.
It seems fairer to fix the parameters independently in each model in such a way as to obtain the optimal performance and the most realistic phenomenology for that particular model, and to make the comparison of the three models only after such optimization. More specifically, we tune parameters in such a way as to have the same (strong) polarization and cohesion in all models, and to match other biological features, such as the radial correlation function, as well as possible. This is an ‘equal observables’ comparison.
We proceed as follows. First, for each model, we fix α and η (actually, their ratio) in such a way as to have a large polarization, Φ = 0.99, which is a reasonable biological value (polarization is very large in real flocks, see Cavagna et al.). Secondly, we fix the interaction range (nc in the topological case, rc in the metric case, and μ in the balanced case) such that cohesion is strong, that is, the average number of (unperturbed) CCs is approximately 1. Once polarization and cohesion are granted, we finally try to optimize the last parameter, namely the attraction strength β, in such a way as to have a biologically plausible radial correlation function, g(r), characterized by a clear drop at the small-r hard core, a weak bump at the first shell of nearest neighbours and no other structure for larger r. When all parameters have been fixed in such a way as to yield biologically consistent observables, we can make a fair comparison of the stability under external perturbation.
In figure 6, we report the radial correlation function g(r) in all three models, for those parameters that make this function as close as possible to the biological ones, figure 3. Even though no model is too far from a biologically plausible g(r), we also see that in the metric case, the radial correlation function is not fully satisfying, as we cannot avoid getting excessive structure, in the form of regular modulations, beyond the first shell of neighbours. Although this difference is not enormous, it is interesting to note that we were unable to eliminate such excessive structure and make the metric g(r) as biologically plausible as the topologically balanced one. If we try to do that by moving one of the parameters, we significantly decrease either cohesion or polarization, or both (we recall that Φ = 0.99 and the number of unperturbed CCs is approximately 1 in all three models, for the parameters in figure 6).
Now that we have calibrated the models on the real biological observables, we can make a meaningful comparison of their resilience to an external perturbation. We perturb the flock by placing an obstacle along its initial direction of motion. As before, we take as a cohesion indicator the number of CCs into which the originally cohesive flock splits after encountering the obstacle. The results are presented in figure 7. We find that the stability of the topologically balanced model against external perturbation is significantly higher than that of the purely topological and of the metric model. The latter has quite poor stability compared with the two topological models. We remark that now we can no longer tune the parameters to enhance the stability of the models, as all parameters have been fixed by imposing the constraints on polarization, unperturbed cohesion and the radial correlation function.
We finally note that the values of the parameters at this optimal, ‘equal observables’, point have some interest per se. In particular, let us compare the parameters of the metric and the topologically balanced case. In order to achieve unperturbed cohesion and high polarization, we need a number of interacting neighbours in the metric case that is twice as large as in the balanced case (nc = 21.2 versus nc = 8.8), and an attraction parameter that is larger by a factor of 10 (β = 0.5 versus β = 0.06). Unsurprisingly, then, the metric g(r) is more structured than the topologically balanced one. This seems a key feature of topological interaction: it grants good (unperturbed) cohesion even with a low-strength attraction force, thus yielding a positionally structureless flock, similar to the real biological case. The second key advantage, as we have seen, is quite good stability under external perturbation.
4.3. Geometric instabilities in the metric and purely topological model
The lower resilience of the metric and of the purely topological model described above is due to the presence of geometric instabilities. Such instabilities are different in nature in the two models, but both make the flock susceptible to fragmentation, although to a different quantitative degree, as we have seen. We can qualitatively understand the origin of these instabilities by looking at the sketch in figure 8. In the metric case (figure 8a), fluctuations caused by the noise term or by external perturbations may push one particle (or a few of them) beyond the metric range rc from its neighbours, making it lose interaction with the rest of the group; in this way, disconnected components may be created. We also note that in the metric case, the ‘evaporation’ of individual particles from the border is one of the main paths to losing stability, giving rise to the kind of histograms we have seen in figures 5 and 7.
In the simple topological case, the evaporation of one single particle cannot occur, since each particle interacts by definition with its first nc neighbours, independently of their distance. However, what may happen in this case (figure 8b) is that a fluctuation instantaneously separates a sub-group of at least nc + 1 individuals from the rest of the flock: these individuals will interact among themselves but not with the others, so that again the aggregation may split. Therefore, even though the simple topological model is not susceptible to the creation of isolated individuals, it may still lead to fragmentation of the flock into subgroups of size larger than nc. Such group separation is a rarer phenomenon than single-particle evaporation, though; for this reason, the topological stability is higher than the metric one.
It was exactly to solve this shortcoming of the purely topological interaction that we introduced a topological model with spatially balanced angular resolution (figure 8c). In such models, sub-groups are not allowed, owing to the angular threshold in the interactions that forces each particle to select interacting neighbours with evenly distributed angles. The simple topological rule, in contrast, does not take into account the spatial distribution of neighbours: if all the first nc neighbours of a given individual are on the same side, that individual will ignore completely individuals on the other side. In this way, as we have seen, sub-groups of size nc become stable and independent whenever they form owing to noise or external perturbations. All this is avoided in the spatially balanced model.
Note, finally, that when using the Voronoi rule to select neighbours [14,22,24], one is effectively running a topologically balanced interaction. The reason for this is that in the Voronoi construction, the ranking in distance of the neighbours is overruled by the topological requirement to have cells all around the focal point. Therefore, Voronoi neighbours are naturally distributed evenly in space around each point. As we discussed earlier, though, here we do not use a Voronoi rule because we want to be able to tune the number of interacting neighbours nc, while in Voronoi this number is fixed.
4.4. Filaments in the topologically balanced model
Even the balanced topological model exhibits, for low values of nc, a geometric instability, although of a weaker nature than in the other models. Such instability consists of the formation of linear structures of particles (filaments) connecting sub-flocks moving in different directions. An example of such structures is shown in figure 9, where we present a snapshot from a simulation with a large value of μ (i.e. a small number of interacting neighbours). We note that filaments were also observed in the study of Grégoire & Chaté. A possible mechanism for the formation of these filaments is displayed in the same figure. Even if the flock is, strictly speaking, connected, the sub-groups separated by the filaments have different polarizations and do not move coherently. It is therefore necessary to give a definition of stability that takes the formation of these structures into account. What is (if any) the minimal value of interacting neighbours nc such that filaments do not form and there is a unique, fully three-dimensional, coherently moving flock?
The minimum value of nc for which full stability (no filaments) is attained will surely depend on the values of the parameters. When the noise is low, cohesion can be achieved with a smaller number of interacting neighbours; increasing the strength of the attraction force has the same effect. When exploring the parameter space, we must proceed as in the previous sections, namely enforce the constraint that simulated flocks must reproduce the same behaviour as that observed in natural groups (polarization and radial correlation function), so that not all combinations of parameters are equally sensible. Too strong an attraction term, or too low a noise, leads to crystalline flocks, where individuals occupy fixed mutual positions, while an important feature of real flocks is that individuals diffuse with respect to one another and that, as we have seen, the radial correlation function g(r) is definitely non-crystalline.
To investigate the minimal number of interacting neighbours, we calculate the fraction of particles belonging to filaments as a function of nc, for different values of the parameters and for two different sizes (figure 10).
At the value of nc where this fraction reaches zero, the flock becomes fully cohesive and no linear structures arise; this is therefore the minimal nc we are looking for. We varied the parameters and the size in such a way as to have the maximal spread in the minimal nc. From figure 10a, we can see that cohesive and stable flocks can be obtained with a number of interacting neighbours between 5 and 10, not far from the value of nc estimated from experimental data. As a consistency check, we calculated the radial correlation function at these values of the parameters: we see from figure 10b that we get a g(r) very similar to the biological one.
5. Discussion
Empirical data on large, natural flocks of starlings in the field revealed that interactions between birds in a flock are topological: each individual interacts with a fixed number of neighbours, independently of their metric distances. It is also possible that other animals performing collective motion, such as some fish species, and even pedestrians, use a topological interaction. We can therefore ask what the advantages of a topological interaction are in terms of collective behaviour. The idea put forward in Ballerini et al. is that topological interaction grants a more robust cohesion of the group. In this paper, we have tested this hypothesis by comparing metric and topological interactions in a three-dimensional model of self-organized collective behaviour.
Our analysis confirms that topological interactions perform better than metric ones. The problem with metric interaction is that individuals can easily drop out of the interaction range, hence losing contact with the rest of the group. However, even the purely topological model is unstable with respect to fragmentation into sub-groups of size nc. The only way to respond to such instabilities, for both metric and purely topological models, is to increase the number of interacting neighbours, which may lead to flocks that are cohesive but with too strong a structure compared with the real ones. The radial correlation function, which has been measured in real flocks, is the main tool we used to check particles' positional structure within the flock.
On the other hand, we found that by using a topological rule that is balanced in space, where neighbours are selected topologically but at the same time are evenly distributed in angle, it is possible to achieve robust cohesion even with a small number of interacting neighbours, while still preserving a realistic structure for the flock, namely a realistic g(r). When we fix parameters independently in each model in such a way that all three models have realistic polarization and structure, and high unperturbed cohesion, it turns out that the topologically balanced model has the highest stability against external perturbation.
When interacting neighbours are selected by using Voronoi tessellation (as done in Grégoire et al. and Chaté et al.), one is enforcing a topological rule that is automatically balanced in space. Indeed, in the Voronoi case, the angular resolution between neighbours is determined by the Delaunay triangulation, whose net effect in terms of spatial balancing of the neighbours is not dissimilar from what we have done here. The number of Voronoi neighbours in three dimensions is approximately 15. Hence, according to our results (figure 10), the Voronoi rule must produce fully cohesive flocks, which is indeed what has been found in numerical simulations [14,27]. We checked explicitly that selecting Voronoi neighbours is equivalent to choosing an angular threshold μ such that the average number of interacting neighbours matches this value.
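This can be probed directly: in three dimensions, Voronoi neighbours are the points sharing a Delaunay edge, which SciPy exposes. A minimal sketch (the raw mean comes out slightly below the bulk value because points near the border have fewer neighbours):

```python
import numpy as np
from scipy.spatial import Delaunay

def voronoi_neighbour_counts(points):
    """Voronoi neighbours = points sharing a Delaunay edge (3D)."""
    tri = Delaunay(points)
    neigh = [set() for _ in range(len(points))]
    for simplex in tri.simplices:       # each 3D simplex is a tetrahedron
        for a in simplex:
            for b in simplex:
                if a != b:
                    neigh[a].add(b)
    return [len(s) for s in neigh]

pts = np.random.default_rng(0).random((2000, 3))
print(np.mean(voronoi_neighbour_counts(pts)))  # close to 15 in the bulk
```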
In order to produce cohesive groups in open space, models need an attraction term in the equation of motion [6,8,10,11,22]. Clearly, the stronger this force, the more robust the cohesion is. However, there is a downside to this: when attraction becomes too strong, the pair correlation function g(r) becomes too structured, developing several peaks, similar to a normal liquid, or, worse, to a crystal, whereas it has been shown that flocks have a nearly structureless, almost gas-like, g(r). Hence, attraction cannot be increased indefinitely to grant better cohesion. Our results show that if neighbours are chosen according to a topologically balanced rule, cohesion can be enhanced without the need of an overly strong attraction force, which introduces spurious structure in the flock, and with a number of interacting neighbours consistent with previous experimental estimates.
Understanding under what conditions a finite aggregation of interacting individuals retains a polarized cohesive structure is a fundamental issue both for biological groups and for artificial systems. In this work, we focused on one aspect of the problem, namely cohesion, and investigated how selection of neighbours determines robustness and stability of cohesion against noise and perturbations. Another issue concerns polarization: one can ask whether some specific values of nc might grant optimal robustness to global ordering of the group. Work in this direction indicates that this is indeed the case (G. Young, L. Scardovi, A. Cavagna, I. Giardina & N. Leonard 2012, unpublished data).
This work was supported in part by grants IIT–Seed Artswarm, ERC–StG n.257126 and AFOSR–Z80910.
One contribution of 11 to a Theme Issue ‘Collective motion in biological systems: experimental approaches joint with particle and continuum models’.
1 In fact, when using Voronoi cells, one could introduce an ‘interaction rate’, in which each particle interacts with a Voronoi neighbour with a certain probability. One could then use this probability as a tuning parameter, in place of the number of interacting neighbours nc.
- Received April 30, 2012.
- Accepted July 16, 2012.
- This journal is © 2012 The Royal Society
Is range a measure of variation?
Range. The range is the simplest measure of variation to find. It is simply the highest value minus the lowest value. Since the range only uses the largest and smallest values, it is greatly affected by extreme values; that is, it is not a resistant statistic.
Is range a measure of center?
These measures of center all use data points to approximate and understand a “middle value” or “average” of a given data set. Two more measures of interest are the range and midrange, which use the greatest and least values of the data set to help describe the spread of the data.
Is range a measure?
The range, the difference between the largest value and the smallest value, is the simplest measure of variability in the data. The range is determined by only the two extreme data values.
Is range a central tendency?
No. Range, which is the difference between the largest and smallest value in the data set, describes how well the central tendency represents the data. If the range is large, the central tendency is not as representative of the data as it would be if the range was small.
Which is not a measure of central tendency?
The measures of central tendency are the mean, the median and the mode. Quantities such as the range, variance and standard deviation are measures of spread, not of central tendency.
Which is the best measure of central tendency?
The mean is usually the best measure for roughly symmetric data without extreme values, while the median is generally preferred for skewed data or data containing outliers.
Is variance a measure of central tendency?
Three measures of central tendency are the mode, the median and the mean. The variance and standard deviation are two closely related measures of variability for interval/ratio-level variables that increase or decrease depending on how closely the observations are clustered around the mean.
Is variance a measure of dispersion?
In statistics, dispersion (also called variability, scatter, or spread) is the extent to which a distribution is stretched or squeezed. Common examples of measures of statistical dispersion are the variance, standard deviation, and interquartile range.
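These measures are all one-liners in code. A small Python example on a made-up data set (the numbers are for illustration only):

```python
import statistics

data = [4, 7, 7, 8, 10, 12, 15, 21]

value_range = max(data) - min(data)           # highest minus lowest
var = statistics.variance(data)               # sample variance
sd = statistics.stdev(data)                   # sample standard deviation
q1, q2, q3 = statistics.quantiles(data, n=4)  # quartiles
iqr = q3 - q1                                 # interquartile range

print(value_range, round(var, 2), round(sd, 2), iqr)
```

Note how the range depends only on the two extreme values, while the standard deviation and interquartile range use more of the data.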
What is the difference between what a measure of central tendency tells us and what a measure of variability tells us?
Measures of central tendency give you the average for each response. Measures of variability show you the spread or dispersion of your dataset.
What is the most frequently used measure of variability?
The standard deviation is the most frequently used measure of variability, because it uses every observation and is expressed in the same units as the data.
What is a measure of variation?
What are measures of variation? Measures of variation describe the width of a distribution. They define how spread out the values are in a dataset. They are also referred to as measures of dispersion/spread.
Does the median represent the center?
The median is the value in the center of the data. Half of the values are less than the median and half of the values are more than the median. It is probably the best measure of center to use in a skewed distribution. Once the depth of the median is found, the median is the value in that position.
What is an advantage of using the range as a measure of variation?
The range is the difference between the largest and the smallest observation in the data. The prime advantage of this measure of dispersion is that it is easy to calculate. On the other hand, it has a lot of disadvantages. It is very sensitive to outliers and does not use all the observations in a data set.
What is an advantage of using the range as a measure of variation quizlet?
The advantage of the range is that it is easy to calculate. The disadvantage is that it uses only two entries from the data set. The standard deviation is used more frequently than the variance because it is expressed in the same units as the data, whereas the variance is in squared units.
Why is the standard deviation preferable to the range as a measure of variation?
The standard deviation is preferable to the range as a measure of variation because the standard deviation takes into account all of the observations, whereas the range considers only the largest and smallest ones.
What is the use of range in statistics?
In statistics, the range is the spread of your data from the lowest to the highest value in the distribution. It is a commonly used measure of variability. Along with measures of central tendency, measures of variability give you descriptive statistics for summarizing your data set.
What is the use of range?
The range is the size of the smallest interval which contains all the data and provides an indication of statistical dispersion. It is measured in the same units as the data. Since it only depends on two of the observations, it is most useful in representing the dispersion of small data sets.
What is range measurement?
The measuring range is the range of measured values for a measurand in which defined, agreed, or guaranteed error limits are not exceeded. It is delimited by a lower and an upper measuring range limit that define the measuring span. Measured values are used in metrology.
What is range of accuracy?
Accuracy is the ability of a measurement to match the actual value of the quantity being measured. Resolution is the number of significant digits (decimal places) to which a value is reliably measured. Range is the amount or extent over which a value can be measured.
What is range and span?
Span – the range of an instrument from the minimum to the maximum scale value. Range – the measure of the instrument between the lowest and highest readings it can measure. For example, a thermometer has a scale from −40°C to 100°C; thus its range is −40°C to 100°C.
What is accuracy in measurement?
Definition 1: accuracy in the general statistical sense denotes the closeness of computations or estimates to the exact or true values. ‘Measurement accuracy’ is also sometimes understood as the closeness of agreement between measured quantity values that are being attributed to the measurand.
There are three types (classes) of levers; they differ by the position of the pivot point relative to where the load and the effort are applied, which is where force and distance come into the equation. A class 1 lever has the pivot between the effort and the load, like the joint in your neck; this is the type of lever used in making a seesaw. It is the only class in which equal pressure on both sides can balance out to give no movement, whereas the other classes need more pressure on one side to stay balanced. A class 2 lever is the type used to lift heavy objects.
In a class 2 lever the load sits between the pivot and the effort; this is the lever used to lift heavy objects, as in a crane. It is also found in your foot: the way you roll off the ball of your foot makes it this type of lever. A class 3 lever, with the effort between the pivot and the load, is the most common lever in the human body in sporting actions. In weightlifting your pivot point would be your elbow and the forces applied would be the weight and the effort of the muscle. The placement of the pivot point is what makes a lever system effective: the further your applied force is from the pivot, the less effort is required, in the same way as on a seesaw.
On a seesaw the centre point is the pivot, which is why it is a fair ride: neither person has an advantage. Likewise, if the pivot point is far from the downward force and near your upward force, the difficulty increases, and you need to apply extra effort for the same effect. In my test I am using a class 3 lever, as it is my elbow joint. I will measure the distances, compare them to the strength required, and record the results. I hope that this test will show me that less effort is needed the further away the weight is from the pivot point.
Task 3
I have performed tests on 2 subjects, each doing a different sporting action. The first is a football kick from standing. In this test, analysing the movement and the velocity has shown subject 1's angle of acceleration. For this test I have 7 pages of calculations of the movement that subject 1 made in his kicking action. All the frames are 0.12 seconds, and on the sheet a 133 mm distance equals 1 m real size (133 mm = 1 m). The subject's leg doesn't move on the first frame; it stays in the same position with no angle of movement.
In my first frames the velocity is 1.05 ms-1 and the angle of movement is 0°/s. There's no change in angle movement until frames 3+4, when the angle changes to 0.01°/s. It changes again in the next frames, 5+6, to 333.33°/s, and it carries on changing and increasing till frames 8+9, which is the recovery stage, when the angle becomes −166.67°/s as the leg comes back towards the starting place. In the final frame it becomes −250°/s as it returns to the starting point. In my calculations my aim was to use trigonometry to calculate the distance A: B and C are squared and then added, and the square root of the sum equals the length of A.
On frames 1+2, B was 5 mm and C was 16 mm. Then I divided them by what a metre is on this sheet (the metre bar was used when filming the actions with subjects 1 and 2). In these first frames they ended up as B = 0.04 m and C = 0.12 m. Then I squared them and got B² = 0.0016 and C² = 0.0144. I added them together and got 0.016. After that I took the square root to get the distance A = 0.13 m. With this number I can do the calculation for the velocity, which is 0.13/0.12 (frame time) = 1.05 ms-1. The velocity changes on each frame, unlike the angle of action.
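The same working can be done in a few lines of Python. This sketch simply automates the method described above (the scale, frame time and frames 1+2 readings are taken from the text):

```python
import math

SCALE_MM_PER_M = 133.0   # 133 mm on the sheet = 1 m real size
FRAME_TIME = 0.12        # seconds per frame

def frame_velocity(b_mm, c_mm):
    """Pythagoras on the two sheet displacements B and C (in mm)
    gives the real displacement A and the frame velocity."""
    b = b_mm / SCALE_MM_PER_M          # convert sheet mm to real metres
    c = c_mm / SCALE_MM_PER_M
    a = math.sqrt(b**2 + c**2)         # A = sqrt(B^2 + C^2)
    return a, a / FRAME_TIME           # displacement (m), velocity (m/s)

a, v = frame_velocity(5, 16)           # the frames 1+2 readings
print(round(a, 3), "m,", round(v, 2), "m/s")
```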
In frames 2+3, A = 0.1638 m, so the velocity is 1.365 ms-1. In frames 3+4, A = 0.342 m and the velocity is 2.85 ms-1. It carries on rising till it reaches the peak velocity, which is 7.614 ms-1 with A = 0.9137 m. In frames 8+9, A = 0.564 m and the velocity, which has dropped dramatically from the previous frames, is now 4.703 ms-1. It drops even further in the final frames, 9+10, where A = 0.2 m and the velocity is just 1.67 ms-1. The last two frames are the recovery phase, which is why the angles of movement are negative rather than positive. The second test, done by subject 2, was of a football throw-in.
For this test I have 15 pages of calculations of the movement that subject 2 made in his throw-in action. All the frames are 0.12 seconds, and on the sheet a 135 mm distance equals 1 m real size (135 mm = 1 m). The subject's arms don't really move on the first frame; they stay in the same position with only 0.0142°/s angle of movement. There's no sudden change in angle movement until frames 5+6, when the angle jumps to 16.67°/s, and it changes again in frames 9+10 to 25°/s.
It carries on changing and increasing till frames 11+12, when it jumps to 208.33°/s with a velocity of 1 ms-1, and then till the action suddenly has major forward movement at 458.33°/s and 2.1 ms-1. The recovery stage is when the angle becomes negative; for this subject it is frames 18+19, at −1083.33°/s, as the arms come back towards the starting place. In my calculations my aim was again to use trigonometry to calculate the distance A, where B and C are squared, added, and then square rooted to equal the length of A. On frames 1+2, B was 16 mm and C was 5 mm.
Then I divided them by what a metre is on this sheet. In these first frames they ended up as B = 0.1185 m and C = 0.037 m. Then I squared them and got B² = 0.014 and C² = 0.0014. I added them together and got 0.0154. After that I took the square root to get the distance A = 0.124 m. With this number I can do the calculation for the velocity, which is 0.124/0.12 (frame time) = 1.03 ms-1. Again, the velocity changes on each frame, unlike the angle of action.
In frames 3+4, A = 0.1638 m, so the velocity is 0.58 ms-1. In frames 4+5, A = 0.342 m and the velocity is 3.529 ms-1. It carries on till it reaches the peak velocity, which is 2.418 ms-1 with A = 0.29 m. It increases further in the recovery phase, where A = 1.280 m and the velocity is 10.67 ms-1.

Task 4
Abstract
This assignment is on my testing of 2 subjects doing 2 different sporting actions. I look at the angle of acceleration, the velocity and the length of A. In Task 1 I have spoken about 3 different sporting actions, all chosen from rugby.
One is a drop kick, another is a pass and the final one is a tackle; I got all of these off the BBC website. I analyse all of the movements in these actions which could contribute to a good action. Task 2 is about the lever system, how it appears in the body and how it affects everything in the sporting world. Task 3 is all about the 2 tests that I carried out: one a football kick and the other a football throw-in. I analyse their acceleration and velocity to see how their ability fares. At the end is my report on what I have discovered by doing my tests and this assignment.
Report to the director of WIS
This report is regarding my testing of 2 subjects to see their ability level and their techniques. Subject 1 is 17 years old and has been playing football for about 14 years. Subject 2 is 18 and has also been playing for about 14 years. Subject 1 did the football kick; my calculations show his velocity and angle of acceleration in his kicking action. Subject 2 did the football throw-in; my calculations for this section go up and down like a yo-yo, so I was not able to estimate the ultimate velocity or acceleration.
This article addresses possible sources of error when strain gauges are used in experimental stress analysis and shows how to successfully assess measurement uncertainty as early as the design stage.
Strain gauge technology with its ample opportunities for error compensation has been optimized for decades. And yet there are influences that may affect strain gauge measurements.
The aim of this article is to point out the many (often avoidable) sources of error when strain gauges are used in experimental stress analysis and to provide assistance so that measurement uncertainty can be assessed as early as the design stage.
Fundamental questions on your measurement setup
The following observations summarize the authors' experience and may be useful before taking strain gauge measurements in experimental stress analysis. The following questions are essential to the required measures (e.g. measuring point protection) and the measurement uncertainty that can be obtained:
- When will the measuring point reach the end of its useful life?
- How high will the strain values be?
- Will there be any temperature variation? If yes, how great and how fast?
- Will special environmental influences (water, humidity, etc.) affect the measuring point?
- What material is the strain gauge being installed on (inhomogeneous, anisotropic, highly hygroscopic, etc.)?
- Is there any possibility to readjust the zero point, if necessary?
The experienced test engineer will look for the answers while still analyzing the measurement task (long before the first strain gauge is installed). The answer to the last question decides whether the measurement is
- zero-point related or
- non zero-point related.
Zero-point related measurements are generally understood as measurements in which current measured values are compared with values obtained at the start of a measurement that runs over several weeks, months or even years; no "zero balancing" of the measurement chain is performed in the meantime. Zero-point related measurements are far more critical than non zero-point related measurements, because zero drifts (resulting from temperature and other environmental influences) are fully incorporated into the result of measurement.
Zero errors are particularly dangerous with small strain values, because this results in very large relative deviations related to the measured value. Strains occurring in machine components and structures often do not even amount to 100 µm/m, because a high safety factor is "built in". 100 µm/m zero drift, in this case, results in 100 % measurement error.
Due to the fact that a continuous measurement for structural monitoring is almost always a zero-point related measurement, special attention needs to be paid to protecting the strain gauges from environmental influences. It is essential that the measuring point offers sufficient long-term stability. Since large temperature variations have to be expected, the temperature coefficients need to be small. Low measurement signal amplitudes at generously dimensioned components are likely to be superimposed by effects resulting from deficient strain gauge installation. The measurement electronics responds to every change in resistance with a change on its display.
This may be due to the change in the quantity to be measured or, also, the ingress of water molecules. The actual measured value, as the aggregate signal of all strain proportions at the strain gauge, does not allow a distinction to be made between wanted and unwanted strain proportions.
Non zero-point related measurements are understood as measurement tasks that allow zero balancing without any information loss at specific points in time. Only the variation of the measured quantity after "zero balancing" is relevant. (Modern bathroom scales are automatically tared every time they are switched on, without any loss of information.) "Zero balancing" is often possible with one-off load tests (often in the form of short-term measurements), hence zero drifts are totally insignificant.
Very high strains occur in destructive tests, which means that strain gauges with adequate measuring ranges are required. It is embarrassing and costly when after weeks of preparatory work it becomes obvious that the strain gauges installed at the component have failed.
Measurements in laboratories and test halls are considered rather uncritical, because the ambient conditions (temperature, humidity) are moderate.
Measurements in the field and in environmental chambers with high humidity and large temperature gradients, however, are critical.
Experimental stress analysis enables mechanical stresses in components to be measured. Experimental stress analysis can be performed to measure stress due to three types of causes: external forces, residual stresses, and thermal stresses.
Loading stress is due to forces applied from outside that cause material loading. Residual stress is due to internal forces in the material, without any external forces being involved. Residual stress arises from non-uniform cooling of cast components, forging, or welding. Thermal stresses occur in systems in which parts with different thermal expansion coefficients are used. They can arise if free thermal expansion of the components is prevented, or as a result of non-uniform heating in the same way as loading stress.
Depending on their absolute value and sign, residual and thermal stress can reduce a component’s loading capacity with respect to external loads.
For purposes of clarity and comprehensibility, only the uniaxial stress state will be considered below. The block diagram (Fig. 1) shows the flow of the measurement signal. It also shows the influence quantities and their effect in correlation with the important features of the measurement chain. These features and effects are shown in blue if they can affect the zero point.
The measurement object (DUT)
When the measurement object under examination is loaded, a stress σ is exerted in the material. This causes a strain in the material that is inversely proportional to the modulus of elasticity (ε = σ/E). This material strain can be determined as a surface strain by means of a strain gauge.
The modulus of elasticity exhibits an uncertainty (tolerance of the modulus of elasticity). Extensive examinations on structural steels have shown a variation coefficient of 4.5%. The modulus of elasticity also depends on temperature as an influence quantity and the temperature coefficient of the modulus of elasticity.
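As a quick numeric illustration of how that scatter propagates into the strain, a minimal Python sketch (the stress and modulus values are invented for the example):

```python
# Minimal sketch with invented numbers: strain from stress via eps = sigma / E,
# and the effect of the ~4.5 % variation coefficient quoted for structural steel.
E = 210e9                      # Pa, assumed nominal modulus for structural steel
sigma = 100e6                  # Pa, assumed stress in the component
print(sigma / E * 1e6, "um/m")             # nominal strain, ~476 um/m
for tol in (-0.045, 0.045):                # +-4.5 % scatter in E
    print(sigma / (E * (1 + tol)) * 1e6, "um/m")   # ~499 and ~456 um/m
```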
If the strain gauge is glued to a surface (such as a bending rod) that is extended convexly, the strain on the measuring grid is greater than on the surface of the component.
The reason for this is the distance from the neutral fiber: the further the measuring grid is from this neutral fiber and the thinner the component, the larger the measured strain becomes relative to the actual surface strain. Smaller roles are played by the thickness of the adhesive and the structure of the strain gauge. The change in temperature (∆t), acting together with the temperature coefficient of expansion of the material, also causes thermal expansion, which is significant for zero-point related measurements.
Elastic after-effects (caused by relaxation processes in the microstructure of the material) cause the strain of the material to diminish somewhat after spontaneous loading. The formula in the chart exhibits several uncertainties.
The required input quantity is the material strain. In the ideal case it is identical to the actual strain of the measuring grid on the strain gauge: ε_grid = ε_material.
In actual practice, however, alignment and other installation errors occur despite great care. The strain gauge, as a spring element subject to mechanical stress, creeps back along its outer edge areas after spontaneous strain, depending on the strain loading and on the rheological properties of the adhesive and the strain gauge carrier. It also exhibits a slight hysteresis. The effect of the strain gauge creeping back is in fact exploited in transducer construction: by adjusting the lengths of the strain-insensitive transverse bridges on the strain gauge, it is used to compensate the material after-effects, which produce an undesirable additional strain. This compensation can only be implemented in experimental stress analysis with a great deal of effort. Increased strain may also occur due to a curved installation surface (see above).
If measuring points are not adequately protected against humidity and moisture, the adhesive and carrier may soak up moisture and swell. This is expressed as an error fraction in the form of an unwanted parasitic strain in the strain gauges.
Moisture content also affects the stability of the measured values as in all methods of measurement (see below strain gauge: insulation resistance). Especially with zero-point related measurements, a test engineer may be uncertain whether he/she is observing the relevant material strain or whether it is simply one of the other effects described above. Because of this, measuring point protection is an essential precondition for reliable results, especially with zero-point related measurements.
This produces the effect that the strain of the measuring grid does not exactly match the material strain in the stress direction.
The strain gauge
The strain gauge converts the strain in the measuring grid into a relative change in resistance proportional to the strain.
The tolerance of the K factor and its temperature sensitivity contribute to the uncertainty.
It should be noted that if the strain is not distributed homogeneously, the average of the strain under the measuring grid is converted into the relative change in resistance. As a result of this, if the wrong active length of the strain gauge is chosen, the values measured for strain and material stress will be too small or too large. This is especially important when determining the maximum values of the mechanical stress peaks metrologically.
The temperature response of the strain gauge affects the zero point. It has an impact with large temperature differences and especially with strain gauges that are poorly adapted to the thermal expansion coefficient of the material (DUT), since they interfere with the action of the compensation effects.
Self-heating (due to electrical power transformed in the strain gauge) has a similar result, as it leads to a temperature difference between the material and the strain gauge. This is the reason why it is possible to set very low excitation voltages on modern measuring amplifiers. Even small bridge output voltages can be accurately amplified by the devices. Caution is advised, however, with thin materials and materials that dissipate heat poorly.
In the case of frequent alternating strain with a large amplitude (> 1500 µm/m) fatigue may occur in the measuring grid material, resulting in a zero drift.
A transverse sensitivity of the strain gauge is present, but it does not produce any significant deviations. In the uniaxial stress state the transverse sensitivity is taken into consideration by the experimental determination of the K factor due to the way the factor is defined.
The linearity deviation of the strain gauge is negligible for strains of up to 1000 µm/m.
Penetration of moisture and humidity reduces the insulation resistances, which in turn causes a resistance shunt to the connections of the strain gauge and is generally reflected by instability in the display of measured values. Low-ohm strain gauges are less sensitive to the influence of moisture and humidity.
The data acquisition unit
The input quantity into the measuring amplifier is the relative change in resistance of the strain gauge.
Since it is very small (at 1000 µm/m and with a K factor of 2 it is just 0.2 % or 0.24 Ω out of 120 Ω), there is an addition to the Wheatstone bridge (quarter bridge circuit) in the experimental stress analysis by means of three fixed resistors (usually in the measuring amplifier). The advantages of half- and full-bridge circuits and ways to use them to reduce measurement uncertainties will not be dealt with here.
The connection of a single strain gauge in a quarter bridge circuit is considered here. Usually the correlation between bridge unbalance and relative change in resistance is described with the linear approximation V_out/V_s = (1/4) · (ΔR/R) = (k/4) · ε.
The actual correlation exhibits a small degree of non-linearity, which will be examined in greater detail below.
The measuring amplifier supplies voltage to the bridge circuit, amplifies the bridge output voltage and generates the measured value.
Deliberately left out of consideration here are measurement errors that can occur due to long supply lead resistances, interference fields, thermoelectric voltages and the measurement electronics themselves.
These can be almost entirely avoided by using well-known technologies (multiwire techniques, extended Kreuzer circuits, shielding designs, modern TF measuring amplifiers).
The modulus of elasticity (manufacturer specification) exhibits an uncertainty (tolerance of the modulus of elasticity) which may be several percent. Accurately determining the modulus of elasticity in a suitable laboratory is costly and often cannot be implemented.
In experimental stress analysis (ESA), the relative uncertainty of the modulus of elasticity produces a relative uncertainty of the same amount in the mechanical stress.
This means that if the material has a modulus of elasticity with a value known within an uncertainty of 5%, that alone produces an uncertainty of 5% in the stated mechanical stress.
The modulus of elasticity also depends on temperature as an influence quantity, via the temperature coefficient (TC) of the modulus of elasticity (for steel ≈ -2 · 10^-4/K). The relative change in the modulus of elasticity is the product of this temperature coefficient and the temperature difference: ΔE/E = TC_E · Δt.
This is equivalent to the additional uncertainty of the mechanical stress.
Example: If the modulus of elasticity of steel is given for a temperature of 23 °C and the measurement is performed at 33 °C, the modulus of elasticity drops by 0.2%. If this effect is not compensated for by computations, there will be a deviation of 0.2% in addition to the tolerance specified for the modulus of elasticity. Note that the TC of the modulus of elasticity is itself temperature-dependent, which means that this effect can never be entirely compensated for.
An important feature of non zero-point related measurements is that the zero point is not needed for analyzing the measurement results: only changes in the measured quantity are of interest, and the zero point does not drift noticeably during the measurement (typical for relatively short measurement tests). Examples are crash tests, tensile tests and brief loading tests.
Material after-effects and strain gauge creep can be somewhat important in non-zero-point related measurements and are therefore covered in this section. On the other hand, phenomena such as thermal expansion, swelling of the adhesive, falling insulation resistance, temperature response of the strain gauge and strain gauge fatigue in non zero-point related measurements are almost completely irrelevant.
Of course, the insulation resistance will not drop so dramatically during a brief loading test that failure of the measuring point becomes possible.
Radius for measurement objects subject to bending loads (increase in strain)
If the strain gauge is located on a component that bends longitudinally to the measuring grid, the strain of the measuring grid deviates from the surface strain of the component (Fig. 2). The measured values obtained are too large. The smaller the radius of curvature and the greater the distance of the measuring grid from the component surface, the greater the effect.
If the strain gauge is located in a concave area, the measured values would likewise be too large in terms of absolute value, and the factor describing the measurement error would be the same. This also results in a multiplicative deviation relative to the measured value. The relative increase in strain is calculated as Δε/ε = d/r, where d is the distance of the measuring grid from the component surface and r is the bending radius.
For a medium distance of 100 μm from the measuring grid to the component surface and a bending radius of 100 mm, the resulting increase in strain is 1/1000 relative to the current strain value. The actual strain of the component in this example is 0.1% lower than the measured strain. That means that the stress is measured 0.1% too large. This measurement error is clearly only relevant for small bending radii.
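A minimal sketch of this check in Python (the function name and unit choices are my own, not from the article):

```python
# Relative increase of the measured strain for a grid sitting a distance d
# above a surface bent to radius r: delta_eps / eps = d / r.
def bending_strain_error(d_m, r_m):
    return d_m / r_m

# Example from the text: 100 um grid offset, 100 mm bending radius.
print(bending_strain_error(100e-6, 100e-3))  # 0.001, i.e. +0.1 %
```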
In many materials, the strain still increases somewhat further after spontaneous mechanical loading. This phenomenon is largely complete after about 30 minutes (steel at 23 °C) and also occurs when the load is removed. The quotient of this additional strain and the spontaneous strain depends heavily on the material. Material after-effects thus produce an additional (positive) measurement error, but only if the strain value is acquired with a delay; if the value is read immediately after spontaneous loading, this deviation can be almost completely avoided.
However, if the measured value is acquired long after the load is applied, and the strain of the material has increased by 1% (relative to spontaneous strain), the result will be that the measured value for the material strain is 1% too large.
If the strain gauge is not exactly aligned in the direction of the material stress (uniaxial stress state), a negative measurement error is produced: the measured strain will be less than the material strain. For an alignment error φ, the relative strain error is e = (1 + ν)/2 · (cos 2φ - 1), where ν is Poisson's ratio.
An alignment error of 5 degrees and a Poisson's ratio of 0.3 (steel) results in a strain error of about -1%; the actual material strain is thus 1% greater than the measured value.
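The same arithmetic as a small Python sketch (my own construction; phi is the alignment error in degrees):

```python
import math

# Relative error of the measured strain for a gauge misaligned by phi degrees
# in a uniaxial stress field: e = (1 + nu)/2 * (cos(2*phi) - 1).
def alignment_error(phi_deg, nu=0.3):
    phi = math.radians(phi_deg)
    return (1 + nu) / 2 * (math.cos(2 * phi) - 1)

print(alignment_error(5.0))  # about -0.0099, i.e. roughly -1 %
```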
After material strain is induced spontaneously, the measuring grid of the strain gauge creeps back somewhat. The process, determined primarily by the properties of the adhesive and the geometry of the strain gauge (short measuring grids are critical; strain gauges with very long reversing lengths do not creep), is also temperature-dependent. After return creep, the strain of the grid is somewhat less than the material strain. The strain gauge often used in ESA (HBM type LY11-6/120 with an active measuring grid length of 6 mm), used with adhesive Z70 (HBM) at a temperature of 23 °C, has a return creep of about 0.1% within one hour. This is equivalent to a negative measurement error of -0.1% relative to the measured stress. Of course the deviation will be less if the measured value is determined immediately after spontaneous loading. Due to its negative sign, strain gauge creep at least partially compensates for the elastic after-effects and may therefore often be completely ignored in ESA. Caution is advised, however, when using other adhesives at higher temperatures: with adhesive X60 (HBM) at 70 °C and a strain of 2000 μm/m, the resulting deviation after just one hour is -5%.
The same applies to the hysteresis: short measuring grids tend to be critical and the adhesive has some effect. The hysteresis for strain gauge LY11-6/120 is only 0.1% with a strain of ±1000 μm/m if Z70 was used as the adhesive. It is therefore negligible.
If a very small strain gauge (LY11-0.6/120) with an active measuring grid length of 0.6 mm has to be used, though, the hysteresis increases, and with it the uncertainty of the strain or stress measurement, to 1%.
The Gauge Factor
It is assumed that the measurement chain is exactly adjusted to the nominal value of the gauge factor, as specified by the manufacturer on the strain gauge package. This factor describes the correlation between the change in strain and the change in relative resistance, and it is determined experimentally by the manufacturer. The uncertainty of the gauge factor is generally 1%, and it produces the same relative degree of uncertainty in both strain and stress measurements.
The gauge factor is temperature-dependent. The sign and amount of the dependence are determined by the measuring grid alloy. The fact that the TC of the gauge factor is itself temperature-dependent can be ignored for purposes of ESA. The TC for a measuring grid made of Constantan is about 0.01% per Kelvin. Thus, the gauge factor decreases by 0.1% with a temperature increase of 10 K, which is generally negligible. If the measurements were performed at 33 °C, the strain or stress values would deviate upward by just 0.1%.
At 120 °C, however, the deviation would reach 1%, which is worth considering.
Measuring Grid Length
As generally understood, a strain gauge integrates the strains under its active surface. If the stress field under that surface is non-homogeneous, the relative change in resistance will not correspond to the greatest local strain, but rather to the average strain under the active measuring grid. This is fatal, because it is especially the greatest stresses that are of interest. The measured values therefore deviate downward from the desired maximum values, leading to negative deviations.
Since this phenomenon is well known, as are suitable countermeasures (a short measuring grid), major errors seldom occur in practical applications. Nevertheless, let's take an example: the measurement is applied to the bending stress at the beginning of the beam, and the strain gauge acquires the average strain under its measuring grid (Fig. 3). The strains behave like the stresses: ε(x)/ε_max = σ(x)/σ_max.
The maximum stress value that is actually wanted could easily be determined in this simple case with a correction calculation. If this is not done, a deviation of the measurement result from the maximum stress will be produced.
Its relative deviation is f = -l_A / (2 · l_2), where l_A is the active measuring grid length and l_2 is the length over which the bending strain falls linearly to zero.
If a measuring grid with an active length of less than 2% of l2 is used in the example above, the deviation drops to less than 1% of the measured value.
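As a numeric check of that 2% figure, a short sketch (my own construction; it simply averages the linearly falling strain under the grid):

```python
# A gauge of active length la averages a linearly falling strain field
# eps(x) = eps_max * (1 - x / l2); the deviation from eps_max is -la / (2*l2).
def grid_average_error(la, l2, n=10_000):
    xs = [la * (i + 0.5) / n for i in range(n)]   # sample points under the grid
    mean = sum(1 - x / l2 for x in xs) / n        # average of eps / eps_max
    return mean - 1.0                             # relative deviation

print(grid_average_error(la=0.02, l2=1.0))  # about -0.01, i.e. -1 % as in the text
```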
Ultimately the ratio of the maximum strain and the measured strain always depends on the distribution of strain under the measuring grid. If this is known from a Finite Element Calculation, the desired maximum value can be calculated from the arithmetic mean of the stress.
Of course, deviations will occur if the strain gauge is positioned incorrectly. This can also be largely avoided and it must be.
Strain gauges with suitable measuring grid materials (Constantan, Karma, Nichrome V, Platinum-tungsten) exhibit excellent linearity. For large strains, however, appreciable deviations can be demonstrated in Constantan measuring grids. The actual static characteristic curve can be described very adequately (empirically) with a quadratic equation: ΔR/R = k · ε · (1 + ε) (eq. 12).
If the strains were determined by inverting this quadratic relationship, there would be no linearity deviations at all. However, as the quadratic component is simply neglected in practical applications, the resulting error should be indicated here. The relative deviation of the strain value determined in this linear fashion from the true value is as large as the strain itself: e = ε.
For strains up to 1000 μm/m, the value of the relative strain deviation does not exceed 0.1%. This is equivalent to 1 μm/m, which is negligible.
Linearity deviation only becomes appreciable at greater strains:
10,000 μm/m results in 1%
100,000 μm/m results in 10%
To a large extent, this is fortunately compensated for by the linearity deviation of the quarter bridge circuit.
Small relative changes in resistance are commonly analyzed with a Wheatstone bridge circuit. As noted above, only one strain gauge per measurement point is usually used in ESA, so the other bridge resistances are strain-independent. The correct relationship for the voltage ratio in this case is V_out/V_s = (ΔR/R) / (4 + 2 · ΔR/R).
Although the relationship is non-linear, linearity is assumed in practical measurement applications (whether or not this is known) and the approximation equation V_out/V_s ≈ (1/4) · (ΔR/R) is used. The relative deviation resulting from this simplification is e = -(ΔR/R) / (2 + ΔR/R) (eq. 17).
A strain of 1000 μm/m (with k = 2) results in a change of 0.2% in the relative resistance.
The relative measurement error as determined with eq. 17 is -0.1%. This is equivalent to an absolute deviation of -1 μm/m. The deviation from the true value is negligible.
Appreciable linearity deviations occur at greater strains however, as noted above:
10,000 μm/m results in a deviation of -1%,
100,000 μm/m results in a deviation of -9.1%.
When Constantan strain gauges are used (non-linearity similar in terms of magnitude, but with the opposite sign), the two deviations largely cancel each other out and therefore do not need to be considered any further.
Note however that no compensation is ever completely successful, especially given that the gauge factor deviates somewhat from 2 and the actual static characteristic curve does not exactly match the empirical eq. 12.
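As a numeric illustration of this partial cancellation, a small sketch (my own construction, assuming k = 2 and the empirical quadratic above):

```python
# Two opposing non-linearities evaluated with the usual linear assumptions:
#   Constantan gauge:  dR/R = k * eps * (1 + eps)        (empirical quadratic)
#   quarter bridge:    Vout/Vs = (dR/R) / (4 + 2 * dR/R) (exact bridge output)
k = 2.0
for eps in (0.001, 0.01, 0.1):            # 1000, 10000, 100000 um/m
    dr_r = k * eps * (1 + eps)            # gauge non-linearity
    v_ratio = dr_r / (4 + 2 * dr_r)       # exact bridge output
    eps_indicated = 4 * v_ratio / k       # linear evaluation of the bridge
    print(eps, eps_indicated / eps - 1)   # residual error stays small (< 1 %)
```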
The individual uncertainties are difficult to correlate with each other. However, to the extent they can be (material after-effects and strain gauge creep, linearity deviation of the strain gauge and quarter bridge circuit), their effects cancel each other out to some extent. Therefore, it is permissible to combine the individual uncertainties with root sum square. The values in bold type above are used to achieve a result for the example.
The uncertainty of the strain measurement is just under 3%. The stress measurement reaches almost 6% of the measured value.
That percentage multiplied by the measured value gives the deviation in μm/m or N/mm². The uncertainty of the modulus of elasticity is generally responsible for the largest contribution to the error in non zero-point related measurements in ESA. Additional uncertainties must be considered for zero-point related measurements.
In these measurements, the zero point is important. These are typically long-term measurements on buildings and fatigue tests on components. If the zero point changes during measurement tasks of this type, the result is an additional measurement error. The measurement uncertainties already discussed in the last part of this series must be added to the ones noted in this section.
Thermal expansion of the DUT, temperature response of the strain gauge
The material being measured has a coefficient of thermal expansion. The thermal expansion will not be measured, as it is simply the result of temperature as an influence quantity. The measuring grid also has a coefficient of thermal expansion as well as a temperature coefficient of the specific electrical resistance. Since only strains induced by loading are of interest in ESA, the strain gauges that are offered are adapted to the thermal expansion of specific materials. However, all these temperature coefficients are themselves a function of the temperature so this compensation will not be entirely successful. The remaining deviation ΔƐ can be calculated with a polynomial. The coefficients of the polynomial are determined batch-specifically and are specified by the manufacturer on the strain gauge package.
An example is the apparent-strain polynomial specified for a strain gauge of HBM type LY-6/120.
The temperature should be inserted in °C (but without dimensions). The remaining deviation (apparent strain) is then obtained in μm/m. For a temperature of 30 °C, the resulting apparent strain is -4.4 μm/m.
If the ambient temperature deviates significantly more from the reference temperature (20 °C), or if the strain gauge is poorly matched to the material, much greater deviations will occur. These are systematic in nature and can be eliminated by calculations (even online during the measurement). The polynomial itself, however, already exhibits an uncertainty that increases by 0.3 μm/m per Kelvin of temperature difference from 20 °C. At a temperature of 30 °C, the uncertainty of the polynomial is 3 μm/m.
The only requirements for the correction calculation are to know the thermal expansion coefficient of the material and the ambient temperature.
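A minimal sketch of such a correction calculation; note that the polynomial coefficients below are placeholders, since the real, batch-specific values come from the strain gauge package:

```python
# Hypothetical batch coefficients a0..a3 -- placeholders, NOT from a data sheet.
# The real values are printed on the strain gauge package; temperature t is
# inserted in degrees C, dimensionless.
A = (-9.6, 0.7, -0.015, 5e-5)

def apparent_strain(t):
    """Remaining thermal output of the matched gauge, in um/m."""
    return sum(a * t**i for i, a in enumerate(A))

measured = 123.0                               # um/m, raw reading at t = 30 C
corrected = measured - apparent_strain(30.0)   # subtract the apparent strain
```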
Self-heating refers to the increase in temperature resulting from the electrical power converted in the strain gauge. For a quarter bridge, half of the excitation voltage drops across the strain gauge, so the heat output is P = (V_s/2)² / R.
For a root mean square value of 5 V for the bridge excitation voltage and a 120 Ω strain gauge, the resulting heat output is 52 mW. A strain gauge with a measuring grid length of 6 mm, applied with a thin layer of adhesive on steel or aluminum, is able to give off this heat sufficiently to the measurement object. A small temperature difference will nevertheless arise between the strain gauge and the measurement object, which leads to an apparent strain (see above).
If the temperature of the adjusted strain gauge is just one Kelvin above the material temperature, there is already an apparent strain of -11 μm/m (ferritic steel) or -23 μm/m (aluminum). The measurement uncertainty can be roughly determined with a simple experiment - the excitation voltage is connected while the load is not applied to the component. In the temperature increase phase, the measured value will drift slightly (zero drift). The greatest difference between measured values during this thermal compensating process corresponds roughly to the maximum expected deviation.
Lower excitation voltages provide a remedy (1 V generates only 2 mW). Strain gauges with higher resistances are also advantageous in this respect.
For components with poor heat conductance (plastics, etc.) and when very small strain gauges are used, lowering the excitation voltage is indispensable. Caution is always advised when working with rapidly changing temperatures. Compensation effects resulting from adjusting the metal foil of the strain gauge to the material being examined have a time constant.
Swelling due to moisture absorption is mainly caused by the high mobility of water molecules and the hygroscopic properties of the adhesives and carrier materials. The effect is a zero drift that is not clearly discernible (or distinguishable from the material strains), and it may take on high values: a strain is measured which does not exist, at least in the component being examined. This parasitic strain is only partially reversible. Unfortunately there is no way to "grab a hair dryer" and drive out the water molecules. The speed at which the measured value drifts depends on the measuring point protection and the ambient conditions; the time constant may be in the range of many hours. A high temperature combined with a high relative humidity is especially critical. Unfortunately no concrete formulas or figures can be given here.
Residue of flux material can also absorb water molecules. This appears in practical applications as a "breathing display": fluctuating measured values, often triggered by a draft or similar cause. Experienced testers recognize the warning and meticulously clean all contact points. "Baking out" the residue is also possible in some circumstances. However, all these countermeasures require that the moist parts are not already enclosed under the protective cover of the measuring point, which they often are for good reason. It has proven practical, when the measuring point is prepared for covering, to heat it a few Kelvin above the prevailing ambient temperature and then cover it immediately. This excludes the possibility of condensation forming later under the cover.

If the insulation resistances are too low, zero drift of the measured values will occur. The insulation resistances within the bridge circuit are extremely critical in this case. Faulty electrical insulation between strain gauge contacts is comparable to a resistance shunt. It cannot be measured directly but, due to its nature, is similar in magnitude to the insulation resistance. The correlation between the apparent strain and a shunt is approximately ε_app ≈ -(1/k) · R / R_p, where R is the strain gauge resistance and R_p the shunt (insulation) resistance.
This equation shows that the effect is lower with high-resistance strain gauges. The following measurement errors are obtained for 120 Ω strain gauges (gauge factor k = 2):
Under “normal” circumstances, insulation resistances greater than 50 MΩ can be achieved and the deviations of less than 1.2 μm/m are negligible.
At 500 kΩ and with a measured value of 1000 μm/m, the zero error would already be -12%! This shows clearly that a significant drop in insulation resistance can cause the measuring point to fail. Strain gauge transducers have insulation resistances of several GΩ.
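The same relationship as a small sketch (my own construction), reproducing the two figures just quoted:

```python
# Apparent strain caused by an insulation shunt Rp across a gauge of
# resistance Rg: eps_app ~ -Rg / (k * Rp), returned here in um/m.
def shunt_apparent_strain(rg_ohm, rp_ohm, k=2.0):
    return -rg_ohm / (k * rp_ohm) * 1e6

print(shunt_apparent_strain(120, 50e6))   # about -1.2 um/m (negligible)
print(shunt_apparent_strain(120, 500e3))  # about -120 um/m, -12 % of 1000 um/m
```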
A high relative humidity with high temperature at the same time (such as saturated vapor) is critical because it leads to a high water vapor pressure. The tiny water molecules push forward and gradually overcome the measuring point protection. It is impossible to predict without a test whether the measuring point will fail after just a few days or several years.
During dynamic loading of the component, signs of fatigue appear in the strain gauge measuring grid; they are expressed as a zero drift (an apparent strain that does not exist in the material). The greater the alternating strain amplitude and the greater the number of load cycles, the greater the effect (Fig. 5).
The installation and the arithmetic mean of the strain also affect the zero drift: if the mean strain is negative, the fatigue life improves; if it is positive, it deteriorates. Practically no zero drift is to be expected for alternating strains with amplitudes up to 1000 μm/m. Greater amplitudes are more critical. A zero error of 10 μm/m may be expected for:
1500 μm/m and approx. 2 mil. load cycles
2000 μm/m and approx. 100,000 load cycles
2500 μm/m and approx. 4000 load cycles
3000 μm/m and approx. 100 load cycles
Note that the test specimen also undergoes fatigue. If its resistance to alternating loads is greater than that of the foil strain gauge, use of optical strain gauges should be considered (fiber Bragg grating).
While the deviations in part 3 of this series are multiplicative in effect and are indicated as a percentage of the measured value, the deviations in this section have an additive effect: their unit is μm/m and they are practically independent of the measured value. A relative deviation comparable to those in part 3 is obtained by dividing the additive deviation by the measured value, e = Δε/ε.
If the values in bold type above are combined using Pythagorean addition, the result is 16.01 μm/m. Since measurement uncertainties should always be rounded up, the uncertainty for the zero point is stated as 17 μm/m. With a strain of 1000 μm/m, the deviation expressed as a percentage is 1.7%, which is certainly reasonable. It is clearly critical with small strains: 17 μm/m out of 100 μm/m is already 17%.
Now the uncertainty of the zero point (1.7% or 17%) must still be added to the uncertainty from part 3 (3% for the strain measurement).
The result of Pythagorean addition is:
4% with a measured value of 1000 μm/m,
18% with a measured value of 100 μm/m.
Usually the mechanical stress is the actual measured quantity so its uncertainty must be estimated. The uncertainty of the stress measurement calculated in part 3 is 6%. Including the uncertainty of the zero point (1.7% or 17%) with Pythagorean addition, the result is:
7% with a strain of 1000 μm/m,
19% with a strain of 100 μm/m.
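These combinations are easy to verify; a minimal sketch (my own construction) using root-sum-square addition:

```python
import math

def rss(*parts):
    # Pythagorean (root-sum-square) combination of independent uncertainties.
    return math.sqrt(sum(p * p for p in parts))

zero_point_um = 17.0                       # zero-point uncertainty, um/m
for eps in (1000.0, 100.0):                # measured strain, um/m
    zero_pct = 100 * zero_point_um / eps   # 1.7 % or 17 %
    print(rss(3.0, zero_pct))              # strain: ~3.4 % and ~17.3 %
    print(rss(6.0, zero_pct))              # stress: ~6.2 % and ~18.0 %
# Rounded up as in the text: 4 % / 18 % (strain) and 7 % / 19 % (stress).
```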
Large relative measurement errors occur with zero-point related measurement tasks, especially with small strains.
Installed strain gauges
The effect of the installer
It has been assumed so far that the installation of the strain gauge measuring point was well planned and conscientiously executed. For this reason, only a few of the individual deviations in the examples above exceeded the range set. It is unfortunately necessary to point out, however, that if the installation is performed very poorly, the measurement errors can assume arbitrarily large values. Imagine for a moment that a very long strain gauge was used to try to measure a notch stress, or that contact resistances to the strain gauge fluctuate by 0.24 Ω (equivalent to a strain error of 1000 μm/m for a 120 Ω strain gauge).
Especially in zero-point related measurements over long periods of time, the importance of measuring point protection cannot be overestimated. An excellent example is the 44 strain gauge measuring points on the FINO 1 research platform (overall height 129 m) in the North Sea (45 km North of Borkum Island). The strain gauges are located 5 to 25 m below the surface of the ocean. Their task was to measure loading strains on the support frame of the platform caused by pile drivers and by waves and wind. After two years in North Sea water, 42 measuring points were still fully functional.
Another gross error occurs if the strain gauge is only partially bonded to the surface of the component being examined. Causes may include poor cleaning or improper handling of the application surface and flaws in the adhesive layer. These causes must and can be avoided; the rubber eraser test generally clarifies the situation. Although it may be possible to dispense with measuring point protection for a short-term measurement (tensile test), installation of strain gauges requires a conscientious approach and frequently a good measure of experience. There is probably no other method of measurement in which the knowledge and experience of the person performing the task play such an important role. This is why companies and institutes are more and more frequently taking advantage of the possibility of certifying their personnel according to VDI/VDE/GESA 2636 on various qualifying levels.
Purpose of the Assessment
The purpose of this assignment is to test communication skills together with the ability to analyse, to interpret and to report. This assignment will also encourage research and investigation.
The case study consists of two sections: Section A carries 70 marks and Section B carries 30 marks. Students are expected to read the case study thoroughly and to answer all the required questions in a structured and organised manner with reference to published work. This is an individual assignment and it is worth 60% of the total module mark.
Academic Honesty: Plagiarism will not be tolerated and could lead to failure, so please make sure you cite and reference correctly. Use proper Harvard referencing and citation; a generator such as https://www.refme.com/uk/referencing-generator/harvard/ may help.
Submission deadline: Your answers should be submitted through Turnitin on NILE no later than 1st July 2016, at 11:59 PM (UK time). Please keep in mind that late submissions will not be allowed. If you have any mitigating circumstances that hinder your ability to submit on the due date, please inform your module leader before the submission date. To learn more about the University's mitigating circumstances policy, please see: http://tundra.northampton.ac.uk/results/searchresult.asp?Title=mitigating+circumstances&Description=&Author=&Department=&Date+Created=&Until+Date+Created=&Document+Type=&Perspectives=&submit=Search
Feedback: There will be a written feedback four weeks after the deadline for submission. You will be informed through NILE when the feedback is ready.
Question 1
Housebuilding plays an important role in the UK economy. The following financial data is for two major UK housebuilding companies that are listed on the London Stock Exchange.
1. Barratt Developments Plc. You will find non-financial ratios on their website to compare with the other company: http://www.barrattdevelopments.co.uk/sustainability/our-reports
Barratt Developments Plc: Annual Ratios [GBP Millions]

                               30-Jun-2015  30-Jun-2014  30-Jun-2013  30-Jun-2012  30-Jun-2011
Financial Strength
Current Ratio                         3.33         3.36         3.02         3.38         3.30
Quick/Acid Test Ratio                 0.37         0.32         0.30         0.19         0.12
Working Capital                3,290,600.0  2,735,800.0  2,410,300.0  2,413,600.0  2,393,000.0
Long Term Debt/Equity                 0.04         0.05         0.05         0.11         0.14
Total Debt/Equity                     0.05         0.06         0.11         0.12         0.14
Long Term Debt/Total Capital          0.04         0.05         0.05         0.10         0.12
Total Debt/Total Capital              0.05         0.06         0.10         0.10         0.12
Payout Ratio                        33.17%       33.02%       32.59%        0.00%        0.00%
Effective Tax Rate                  20.37%       21.81%       28.52%       32.60%            -
Total Capital                  3,878,900.0  3,546,100.0  3,420,000.0  3,315,800.0  3,335,900.0

Efficiency
Asset Turnover                        0.68         0.63         0.54         0.49         0.41
Inventory Turnover                    0.79         0.78         0.70         0.62         0.54
Days In Inventory                   460.40       466.63       522.76       587.27       670.30
Receivables Turnover                 30.54        37.74        47.09        46.51        36.02
Days Receivables Outstanding         11.95         9.67         7.75         7.85        10.13
Revenue/Employee                   629,627      548,566      521,240      516,311      508,850
Operating Income/Employee           96,600       71,208       33,040       40,089       20,200
EBITDA/Employee                     97,153       71,555       33,360       40,444       20,650

Profitability
Gross Margin                        19.00%       16.77%       13.78%       12.75%       11.19%
Operating Margin                    15.34%       12.98%        6.34%        7.76%        3.97%
EBITDA Margin                       15.43%       13.04%        6.40%        7.83%        4.06%
EBIT Margin                         15.34%       12.98%        6.34%        7.76%        3.97%
Pretax Margin                       15.04%       12.37%        4.01%        4.30%       -0.57%
Net Profit Margin                   11.95%        9.67%        2.87%        2.90%       -0.68%
COGS/Revenue                        81.00%       83.23%       86.22%       87.25%       88.81%
SG&A Expense/Revenue                 3.66%        3.79%        4.09%        4.52%        4.56%

Management Effectiveness
Return on Assets                     8.15%        6.11%        1.56%        1.41%       -0.27%
Return on Equity                    12.75%        9.52%        2.47%        2.28%       -0.47%
2. Persimmon Homes. You have to choose non-financial ratios from their website, for example: http://corporate.persimmonhomes.com/investors/our-business/key-performance-indicators
Non-financial ratios such as:
1. Waste generated per home sold and % recycled
2. Reportable injuries under the Reporting of Injuries, Diseases and Dangerous Occurrences Regulations (RIDDOR)
Persimmon Homes plc: Annual Ratios [GBP Millions]

                               31-Dec-2014  31-Dec-2013  31-Dec-2012  31-Dec-2011  31-Dec-2010
Financial Strength
Current Ratio                         3.42         3.35         3.84         3.74         3.88
Quick/Acid Test Ratio                 0.53         0.38         0.40         0.15         0.29
Working Capital                2,017,500.0  1,742,800.0  1,701,400.0  1,538,000.0  1,677,000.0
Long Term Debt/Equity *               0.00         0.00         0.00         0.00         0.09
Total Debt/Equity *                   0.00         0.00         0.00         0.00         0.12
Long Term Debt/Total Capital *        0.00         0.00         0.00         0.00         0.08
Total Debt/Total Capital *            0.00         0.00         0.00         0.00         0.11
Payout Ratio                         0.00%        0.00%        0.00%       27.64%       19.58%
Effective Tax Rate                  20.34%       23.70%       23.24%       25.95%       25.08%
Total Capital                  2,192,600.0  2,045,500.0  1,993,700.0  1,839,400.0  1,949,900.0

Efficiency
Asset Turnover                        0.81         0.72         0.64         0.58         0.57
Inventory Turnover *                  0.87         0.78         0.70         0.64         0.65
Days In Inventory *                 419.59       465.50       521.26       567.06       565.67
Receivables Turnover                 38.62        37.75        43.97        35.86        36.33
Days Receivables Outstanding          9.45         9.67         8.30        10.18        10.05
Revenue/Employee                   745,410      747,367      684,453      631,168      650,166
Operating Income/Employee          134,752      122,035       87,157       66,571       84,631
EBITDA/Employee                    136,548      123,647       88,787       68,257       86,620

Profitability
Gross Margin                        22.22%       20.20%       17.54%       14.53%       12.41%
Operating Margin                    18.08%       16.33%       12.73%       10.55%       13.02%
EBITDA Margin                       18.32%       16.54%       12.97%       10.81%       13.32%
EBIT Margin                         18.08%       16.33%       12.73%       10.55%       13.02%
Pretax Margin                       18.14%       16.16%       12.68%        9.59%        9.81%
Net Profit Margin                   14.45%       12.33%        9.73%        7.10%        7.35%
COGS/Revenue                        77.78%       79.80%       82.46%       85.47%       87.59%

Management Effectiveness
Return on Assets                    11.66%        8.84%        6.24%        4.11%        4.18%
Return on Equity                    17.56%       12.74%        8.74%        6.08%        6.85%

* Ratios marked with an asterisk largely duplicate one another; choose just one from each group, otherwise you will be repeating yourself.
You are required to:
a) Select and justify at least 10 financial ratios from each company (the same ratios for both) and calculate 2 non-financial ratios to analyse the performance, financial position and investment potential of the two companies. You are expected to use charts to compare the performance of the two companies. You will need to look at the audited financial statements and carry out further research to explain the performance of each company over the five years. A clear ranking is expected for each ratio. (The ratios marked with an asterisk in the tables above largely duplicate one another; choose just one from each group, otherwise you will be repeating.) Prepare charts and graphs for each ratio across financial strength, efficiency, gearing, profitability and management effectiveness, and state which company is doing well. Analyse each ratio rather than merely applying a formula. For example, the company with lower gearing is generally better placed, because it carries less debt, pays less interest, and this benefits shareholders. The financial ratios have already been given, so you only need to calculate the non-financial ratios for both companies. Create a graph for each ratio for both companies, analyse where it decreases or increases in each year and why, and give a ranking decision. Remember why non-financial ratios are important: financial ratios alone do not give the bigger picture. (40 Marks)
b) Summarise the ranking and create a consolidated ranking template. You can use ratio averages for the past five years. You need to decide on the weighting of each ratio in your overall rating. Discuss the reasons for the variations in performance over the past five years. (10 Marks)
c) Write a memo to the managing director of the number two (poor performing) company with recommendations of how the financial performance of the business can be improved. (10 marks)
d) Discuss the limitations of only using ratio analysis as a tool to interpret company performance. For example: financial ratios can be manipulated; a gap of up to six months between the two companies' reporting dates (firms have freedom to choose their year-ends) hinders comparison; and ratios can be narrow, so not everything gives a clear picture from them. (10 Marks)
You are expected to research more information on the companies and cite the material correctly. You can use the Global Business Browser database to access analysts' and SWOT reports.

Question 2: Investment Appraisal
Your company has the option to invest in either project T14 or project R26, but finance is only available to invest in one of them. You are given the following projected data:
Project            T14        R26
Initial Cost   (70,000)   (60,000)
Year 1           15,000     20,000
Year 2           18,000     25,000
Year 3           20,000   (50,000)
Year 4           32,000     10,000
Year 5           18,000      3,000
Year 6                -      2,000
You are told:
(1) All cash flows take place at the end of the year, apart from the original investment in the project, which takes place at the beginning of the project.
(2) Project T14 machinery is to be disposed of at the end of year 5 with a scrap value of £10,000.
(3) Project R26 machinery is to be disposed of at the end of year 3 with a nil scrap value and replaced with new project machinery that will cost £75,000.
(4) The cost of this additional machinery has been deducted in arriving at the profit projections for R26 for year 3. It is projected that it will last for three years and have a nil scrap value.
(5) The company's policy is to depreciate its assets on a straight-line basis.
(6) The discount rate to be used by the company is 14%.
Required:
1) Calculate the annual cash flows for each of the projects. (5 Marks)
2) Using the investment appraisal methods calculate the ARR, Payback and NPV. (15 Marks)
3) Advise the company on which project, if either, should be accepted, based on the methods used above. Provide a clear explanation of what the calculations mean for the company. (10 Marks)
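For orientation only, a minimal sketch of the discounting arithmetic behind the NPV and payback methods; the cash flows below are placeholders to be replaced with the flows derived in part 1:

```python
# Illustrative sketch only: NPV and simple payback at the 14 % discount rate.
# The flows are placeholders; substitute the annual cash flows from part 1
# (and repeat for project R26).
RATE = 0.14

def npv(rate, flows):
    # flows[0] is the year-0 outlay; later entries are year-end cash flows.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def payback_year(flows):
    # First year in which the cumulative (undiscounted) flow turns non-negative.
    total = 0.0
    for t, cf in enumerate(flows):
        total += cf
        if total >= 0:
            return t
    return None

t14 = [-70_000, 15_000, 18_000, 20_000, 32_000, 18_000]  # placeholder flows
print(npv(RATE, t14), payback_year(t14))
```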
ADDITIONAL GUIDANCE
1. All calculations must be detailed and presented clearly.
2. Use of published work (citing references) within the text is expected.
3. A full list of references should be presented at the end of the case study.
4. Please avoid the use of 'I, we, us' in your case study. You are expected to write in the third person.
5. Include the assignment front sheet which is attached to the assignment brief.
6. Your answer should not repeat the question, as it will be included in your word count.
7. Formatting:
   a. Font type: Arial.
   b. Font size: 11/12.
   c. Line spacing: 1.5 to 2.
   d. All pages must be numbered.
   e. All graphs, charts and tables should have a number and a title.
   f. All text must be aligned to the left.
8. Good use of English, referencing and presentation will earn marks.
9. Submit online and on time; late submissions will not be accepted.
10. For extensions or deferral of assessment, please refer to the University policy on mitigating circumstances.
Accounting and Finance Penalties
1. Word count*: All assessments have a word count with a tolerance of 10% only. Submissions that exceed the word count will be penalised as follows: one grade point for every 150 words or part thereof.
2. Missing references: penalty is three grade points minimum (see module guide for further details).
3. Front sheet missing: penalty one grade point.
4. Word count missing or inaccurate: penalty one grade point.
* Front sheet, contents page, references and any appendices do not count in the word count.

Accounting & Finance Front Sheet (ACC3015)
NB: This sheet must be attached to any online submission of Accounting & Finance field module coursework. No assignment will be accepted without it.
Student IDs: _________________________________________
Student Name:
Title of Coursework: ACC3015 Case Study
Marking Tutor:
Hand-in Date: 1st July 2016
Checklist before submission
1. Have you read, understood and acted in accordance with the referencing guidelines set out in the appropriate Accounting & Finance Module Guide?
2. Where you have quoted directly from or paraphrased the work of others, have you acknowledged and appropriately referenced the source of your quotation in the body of the text?
3. Have you placed all direct quotations in inverted commas?
4. Have you listed and correctly cited all your sources in your bibliography?

Declaration by the candidate named above
1. I confirm that this is my own work (or, in the case of a group assignment, the work of my group) and that, although I may have consulted others in the course of assembling material for the work, the finished article has been completed without help or participation of any other person (other than, in group assignments, other members of the same group).
2. The work contains no material drawn from unattributed sources.
Student Signature ________________________
Date Signed ________________________ |
Presentation transcript: "Introduction to Statistics"
1 Introduction to Statistics
Chapter 1: Introduction to Statistics
1-1 Overview
1-2 Types of Data
1-3 Abuses of Statistics
1-4 Design of Experiments
2 Overview
Statistics (Definition): a collection of methods for planning experiments, obtaining data, and then organizing, summarizing, presenting, analyzing, interpreting, and drawing conclusions based on the data.
3 Definitions
Population: the complete collection of all data to be studied.
Sample: the subcollection of data drawn from the population.
4 Example: Identify the population and sample in the study
A quality-control manager randomly selects 50 bottles of Coca-Cola to assess the calibration of the filling machine.
Emphasize that a population is determined by the researcher, and a sample is a subcollection of that pre-determined group. For example, if I collect the ages from a section of elementary statistics students, that data would be a sample if I am interested in studying the ages of all elementary statistics students. However, if I am studying only the ages of that specific section of elementary statistics, the data would be a population.
5 Definitions
Statistics is broken into 2 areas: Descriptive Statistics and Inferential Statistics.
6 Definitions
Descriptive Statistics: describes data, usually through the use of graphs, charts and pictures. Simple calculations like mean, range, mode, etc., may also be used.
Inferential Statistics: uses sample data to make inferences (draw conclusions) about an entire population. (Test question)
7 1-2 Types of Data
Parameter vs. Statistic
Quantitative Data vs. Qualitative Data
Discrete Data vs. Continuous Data
8 Definitions
Parameter: a numerical measurement describing some characteristic of a population. (Think: population goes with parameter.)
9 Definitions
Statistic: a numerical measurement describing some characteristic of a sample. (Think: sample goes with statistic.)
10 Examples
Parameter: 51% of the entire population of the US is female.
Statistic: Based on a sample from the US population, it was determined that 35% consider themselves overweight.
11 Definitions
Quantitative data: numbers representing counts or measurements.
Qualitative (or categorical or attribute) data: can be separated into different categories that are distinguished by some nonnumeric characteristic.
12 Examples
Quantitative data: the number of FLC students with blue eyes.
Qualitative (or categorical or attribute) data: the eye color of FLC students.
13 Definitions
We further describe quantitative data by distinguishing between discrete and continuous data. (Quantitative data splits into discrete and continuous.)
14 Definitions
Discrete data result when the number of possible values is either a finite number or a 'countable' number of possible values: 0, 1, 2, 3, . . .
Continuous (numerical) data result from infinitely many possible values that correspond to some continuous scale or interval that covers a range of values without gaps, interruptions, or jumps.
Understanding the difference between discrete and continuous data will be important in Chapters 4 and 5. When measuring continuous data, the result will be only as precise as the measuring device being used.
15 Examples
Discrete: the number of eggs that hens lay; for example, 3 eggs a day.
Continuous: the amounts of milk that cows produce; for example, gallons a day.
16 Definitions
Univariate Data: involves the use of one variable (X); does not deal with causes and relationships.
Bivariate Data: involves the use of two variables (X and Y); deals with causes and relationships.
17 Examples
Univariate Data: How many first-year students attend FLC?
Bivariate Data: Is there a relationship between the number of females in Computer Programming and their scores in Mathematics?
18 Important Characteristics of Data
1. Center: a representative or average value that indicates where the middle of the data set is located.
2. Variation: a measure of the amount that the values vary among themselves, or how the data is dispersed.
3. Distribution: the nature or shape of the distribution of the data (such as bell-shaped, uniform, or skewed).
4. Outliers: sample values that lie very far away from the vast majority of other sample values.
5. Time: changing characteristics of the data over time.
These are the most important characteristics necessary to describe, explore, and compare data sets.
19 Uses of Statistics
Almost all fields of study benefit from the application of statistical methods: sociology, genetics, insurance, biology, polling, retirement planning, automobile fatality rates, and many more too numerous to mention.
20 1-3 Abuses of Statistics
Bad Samples; Small Samples; Loaded Questions; Misleading Graphs; Pictographs; Precise Numbers; Distorted Percentages; Partial Pictures; Deliberate Distortions
21 Abuses of Statistics
Bad Samples: inappropriate methods to collect data; BIAS (on test). Example: using phone books to sample data.
Small Samples (an example will be on the exam): we will talk about sample size later in the course. Even large samples can be bad samples.
Loaded Questions: survey questions can be worded to elicit a desired response.
22 Abuses of Statistics (continued)
23 Salaries of People with Bachelor's Degrees and with High School Diplomas
[Two bar charts, (a) and (b), comparing average salaries: Bachelor's degree $40,500 vs. high school diploma $24,400. Chart (a) has a vertical scale that does not start at 0, exaggerating the difference; chart (b) starts at 0.] Graphs whose vertical scales do not start at 0 give a misleading representation of the differences in the heights of the bars. (Test question)
24 We should analyze the numerical information given in the graph instead of being misled by its general shape.
25 Abuses of Statistics (continued)
26 Double the length, width, and height of a cube, and the volume increases by a factor of eight. What is actually intended by such a pictograph: 2 times or 8 times?
27 Abuses of Statistics (continued)
28 Abuses of Statistics
Precise Numbers: "There are 103,215,027 households in the US." This is actually an estimate, and it would be best to say there are about 103 million households.
Distorted Percentages: a 100% improvement doesn't mean perfect.
Deliberate Distortions: lies, lies, all lies.
29 Abuses of Statistics (continued)
30 Abuses of Statistics
Partial Pictures: "Ninety percent of all our cars sold in this country in the last 10 years are still on the road." Problem: what if 90% of them were sold in the last 3 years?
32 Definitions
Experiment: apply some treatment (action) and observe its effects on the subject(s) (observe).
Example: Experiment: toss a coin. Event: observe a tail.
33 Designing an Experiment
1. Identify your objective.
2. Collect sample data.
3. Use a random procedure that avoids bias.
4. Analyze the data and form conclusions.
34 Methods of Sampling
Random (the type discussed in this class), Systematic, Convenience, Stratified, Cluster: a review of the 5 different types of sampling.
35 Definitions
Random Sample: members of the population are selected in such a way that each has an equal chance of being selected (if not, the sample is biased).
Simple Random Sample (of size n): subjects are selected in such a way that every possible sample of size n has the same chance of being chosen.
36 Random Sampling: selection so that each member has an equal chance of being selected.
37 Systematic Sampling: select some starting point and then select every Kth element in the population.
38 Convenience Sampling: use results that are easy to get.
39 Stratified Sampling: subdivide the population into at least two different subgroups that share the same characteristics, then draw a sample from each subgroup (or stratum).
40 Cluster Sampling: divide the population into sections (or clusters); randomly select some of those clusters; choose all members from the selected clusters.
Students most often confuse stratified sampling with cluster sampling. Both break the population into strata or sections: with stratified sampling a few members are selected from every stratum, while with cluster sampling a few clusters are chosen and all of their members are included.
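To make the distinction between these sampling methods concrete, a minimal Python sketch (the population and the strata are invented for illustration):

```python
import random

population = list(range(1, 101))                       # invented population of IDs

# Simple random sample: every sample of size n is equally likely.
srs = random.sample(population, 10)

# Systematic sample: random start, then every k-th element.
k = 10
systematic = population[random.randrange(k)::k]

# Stratified sample: a few members from EVERY stratum...
strata = {"A": population[:50], "B": population[50:]}  # invented strata
stratified = [x for group in strata.values() for x in random.sample(group, 5)]

# ...versus cluster: pick whole clusters and take ALL of their members.
clusters = [population[i:i + 10] for i in range(0, 100, 10)]
cluster_sample = [x for c in random.sample(clusters, 2) for x in c]
```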
41 Definitions
Sampling Error: the difference between a sample result and the true population result; such an error results from chance sample fluctuations.
Nonsampling Error: sample data that are incorrectly collected, recorded, or analyzed (such as by selecting a biased sample, using a defective instrument, or copying the data incorrectly).
42 Using Formulas
Factorial notation: 8! = 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1
Order of operations: (1) parentheses, (2) powers, (3) multiplication and division, (4) addition and subtraction; read like a book. Keep the number in the calculator as long as possible.
Introduction to Probability and Statistics (STAT 20)
Probability Models General Probability Rules 0 Coin tossing 0 Probability models 7 Sample spaces and events 7 Venn diagrams 7 Basic probability rules 7 Assigning probabilities a nite sample space 7 Assigning probabilities intervals of outcomes 7 Independence and the multiplication rule Randomness and Probability Recall We call a phenomenon random if individual outcomes are uncertain but there is a regular distribution of outcomes in a large number of repetitions The probability of any outcome of a random phenomenon if the proportion of times the outcome would occur in a very long series of repetitions Example COln tossing Fiench natuialist Buffon obseived 2043 heads in 4040 tosses 7 2043 7 Relative fiequency 4040 a 0 5069 English statistician Kail Peaison obseived 12012 heads in 24000 tosses Relative frequency 3353 0 5005 English mathematician John Kexxlch observed 5057 heads in 10000 tosses Relative frequency 5067 7 0 5067 10000 T as mo na quotimimusse mo na I38 u 2m mu am am mun quotimimusse 2 Probability Models The sample space7 denoted by S is the set of all possible outcomes of a random phenomenon o Toss a coin and iecoid the side facing up Then s HeadTail H T o Toss a coin tWice Record the side facing up each time Then s 7 o Toss a com twlce Record the number of heads m the two tosses Then S 7 An event is an outcome or a set of outcomes of a random phenomenon ie a subset of the sample space Toss a coin three times Then s HHH HHT HTHTHH HTTTHTTTHTTT 0 Let A be the event that we get exactly two tails Then A 7 0 Let B be the event that we get at least one head Then B A probability model is a mathematical description of a random phenomenon consisting of two parts a sample space S and a way of assigning probabilities to events The probability of an event A7 denoted by PA7 can be considered the long run relative frequency of the event A Set Notation Suppose A and B are events in the sample space S Then 0 AUBEAorBE the set of all outcomes in A or in B or in both 0 A B E A and B E the set of all outcomes that are in A AND in B o A O B 0 E A and B are disjoint E A and B are mutually exclusive E A and B have no outcomes in common 0 AC E the complement of A E the event that A does not occur Example be the event that we get 2 heads B the event C the event that we get at least one head Toss a com twice Let A that we get exactly 1 tall and So A HH B TH HT C HH HT TT 0 A 7 o AUB o B 7 o A D oA B OBUD Probabilities in a Finite Sample Space If the sample space is nite each distinct event is assigned a probability The probability of an event is the sum of the probabilities of the distinct outcomes making up the event If a random phenomenon has k equally likely outcomes each individual outcome has probability For any event A 7 number of outcomes in A P A 7 number of outcomes in S Rules of Probability 1 For any event A 0 PA 1 2 135 1 3 For any event A PA 17 PA F If PA O B Q then PA U B PA PB More generally PA o B PA PB e PA o B Example Roll a fair die and looking at the face value Sample space S 1 234 56 This is a nite sample space and each outcome is equally likely That is PX j 16 Vj E S where X is the face value of the die after rolling PXZ5PX5PX6161613 PX 2 7 mm Probabxhues Inter3 s of Outcomes A mama mam mmba 5mm 5 dSuned m o mm a m marinas inme bet mam swap saw David Shilane UC Berkeley The Accuracy of Percentages David Shilane Lecture 177 Statistics 20 University of California Berkeley Tuesday7 April 10th7 2007 April 10th7 2007 Statistics Pace 1 David Shilane UC1 Berkeley We re often interested in 
The Accuracy of Percentages
David Shilane, Lecture 17, Statistics 20, University of California, Berkeley
Tuesday, April 10th, 2007

We're often interested in estimating a percentage. Some examples include:
- Baseball batting averages
- The proportion of customers who buy an item during a sale
- The risk of obtaining a disease
- Political approval rates

The usual statistical technique we use to estimate percentages is to sample data and calculate the proportion of outcomes we're interested in. However, because data are random and we can usually only collect a small amount of it, the question is: how accurate are our estimates?

A Motivating Example

[Figure 1: Tarja Halonen, President of Finland.]
[Figure 2: Tarja Halonen with her doppelganger, Conan O'Brien.]

Estimating Tarja's Approval Rating

We can define a politician's approval rating to be the proportion of constituents who approve of the politician. Thought of another way, the approval rating is the probability that a randomly selected constituent will approve of the politician.

Sometimes approval is measured in different ways: it can be the proportion of people who intend to vote for the politician, who approve of actions relating to an issue, or even just whether people like the person or not. Unfortunately, people respond differently depending upon what question is asked and even how it's delivered. Therefore it is important to remember that the results obtained from a poll are with respect to a particular question, and we should be hesitant to generalize results for one question to answer another.

Types of Surveys
- Do you approve of Tarja Halonen? Yes / No
- How strongly do you approve of Tarja Halonen? 1 2 3 4 5 6

For the latter survey, we might say people approve of Tarja if they responded with at least 4, and otherwise disapprove.

The Data

In the simplest case, we would draw $n$ names out of a hat with replacement and ask them to complete the survey. Then our data are, for $1 \le i \le n$,
$$X_i = \begin{cases} 1 & \text{if person } i \text{ approves} \\ 0 & \text{if person } i \text{ disapproves.} \end{cases}$$
Our quantity of interest is the approval rating $P(X_i = 1) = p$, where $p$ is an unknown number we wish to estimate. We can do so by taking the empirical proportion $\hat p$ of people who approve, which is also the sample mean of $n$ Binomial$(1, p)$ random variables:
$$\hat p = \bar X = \frac{1}{n}\sum_{i=1}^n X_i.$$

Is it really with replacement?

The short answer is no. In reality, people are selected without replacement, and this means we have to use more complicated methods; there are entire courses devoted to designing and analyzing surveys. Another potential problem is that it may be difficult to select people with equal probability: there are do-not-call lists, and of course some people don't have telephones. Furthermore, not everyone you call necessarily responds, and this may lead to a selection bias in your results. For all these reasons, opinion polling is rarely as simple as the survey we're describing. However, at the end of the day you still have to compute an estimate and assess its accuracy, so what we would do in the simple situation provides some building blocks for the more difficult problems.

But what are we estimating?

The underlying assumption here is that Tarja has a true approval rating $p = P(X_i = 1) = E(X_i)$, because $X_i$ is Binomial$(1, p)$. This is equivalent to the approval rating we would obtain from surveying all $N \approx 5$ million people in Finland.
This is not only impractical but also very costly in terms of time and labor, so the best we can do is sample as many people as we can. If each person approves with probability $p$ and disapproves with probability $1 - p$, then sampling with replacement is like flipping a weighted coin $n$ times.

Expected Value and Variance for a Single Coin Flip
- $E(X_i) = 1 \cdot p + 0 \cdot (1 - p) = p$
- $\mathrm{Var}(X_i) = E(X_i^2) - [E(X_i)]^2 = 1^2 p + 0^2 (1 - p) - p^2 = p - p^2 = p(1 - p)$
- $\mathrm{SD}(X_i) = \sqrt{\mathrm{Var}(X_i)} = \sqrt{p(1 - p)}$

Expected Value and Variance of the Sample Mean
- $E(\bar X) = E\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n}\sum_{i=1}^n E(X_i) = \frac{1}{n}(np) = p$
- $\mathrm{Var}(\bar X) = \frac{1}{n^2}\mathrm{Var}\left(\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}(X_i) = \frac{1}{n^2}\, np(1 - p) = \frac{p(1 - p)}{n}$

Note: $E(\sum_{i=1}^n X_i) = \sum_{i=1}^n E(X_i)$ always, but $\mathrm{Var}(\sum_{i=1}^n X_i) = \sum_{i=1}^n \mathrm{Var}(X_i)$ only when $X_1, \ldots, X_n$ are uncorrelated, which they are here by independence.

Mean and Variance for Tarja's Estimated Approval Rating
- Mean: $\hat p = \bar X = \frac{1}{n}\sum_{i=1}^n X_i$
- Variance: $\frac{\hat p(1 - \hat p)}{n}$
- SD: $\sqrt{\frac{\hat p(1 - \hat p)}{n}}$

The Law of Large Numbers and Central Limit Theorem
- Because $E(\bar X) = p$ and $\mathrm{SD}(\bar X) = \sqrt{p(1-p)/n}$, the law of large numbers says that the sample mean will get closer and closer to the true mean as $n$ grows larger. This is true because as $n \to \infty$, $\mathrm{SD}(\bar X) \to 0$, so $\hat p \to p$.
- The central limit theorem says that $\hat p$ will have approximately a Normal distribution as $n$ grows large, because it is a mean of $n$ independent, identically distributed random variables. Therefore, when we collect a large number of surveys, we can use the Normal distribution to make probability statements about the results.

Confidence Intervals

We are often interested in determining a range of values that cover a certain proportion of the data. This range is called a confidence interval and is specified by the proportion you ask for. It is very common to use a 95% confidence interval, but the number itself is somewhat arbitrary. To determine a confidence interval we need to convert to standard units: $z = \frac{X - E(X)}{\mathrm{SD}(X)}$. We start by finding the value of $z$ in the Normal table that specifies the desired proportion. When we want to cover 95% of the area under the Normal curve, we use $z \approx 1.96$. Sometimes we refer to this value as $z_{0.95} = 1.96$ to indicate that it covers 95% of the area.

Finding an Interval

In order to find a 95% confidence interval, we backsolve the standard-units equation: $z_{0.95} = \frac{X_{right} - E(X)}{\mathrm{SD}(X)}$ gives the right endpoint $X_{right} = E(X) + z_{0.95}\,\mathrm{SD}(X)$. Then we just plug in $-z_{0.95}$ to find the left endpoint $X_{left} = E(X) - z_{0.95}\,\mathrm{SD}(X)$. Thus, for any Normal random variable $X$, a 95% confidence interval is given by
$$(X_{left},\ X_{right}) = E(X) \pm z_{0.95}\,\mathrm{SD}(X).$$

A 95% Confidence Interval for the Sample Mean

Remember that $E(\hat p) = p$. We would prefer to fill in the true expected value, but since we don't know $p$, the best we can do is fill in our estimate $\hat p$ in its place. Likewise, $\mathrm{SD}(\hat p) = \sqrt{p(1-p)/n} \approx \sqrt{\hat p(1 - \hat p)/n}$. Therefore we plug in these values for the mean and SD to find a 95% confidence interval for the sample mean:
$$(\bar X_{left},\ \bar X_{right}) = \hat p \pm z_{0.95}\sqrt{\frac{\hat p(1 - \hat p)}{n}} = \bar X \pm 1.96\sqrt{\frac{\bar X(1 - \bar X)}{n}}.$$
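The interval formula above is one line of code. A minimal sketch (the `approval_ci` helper is ours; the 573-out-of-1000 figures come from the Tarja Halonen poll worked below):

```python
from math import sqrt

def approval_ci(approvals, n, z=1.96):
    """95% confidence interval for a proportion: p_hat +/- z*sqrt(p_hat*(1-p_hat)/n)."""
    p_hat = approvals / n
    half_width = z * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# The Tarja Halonen poll from the next section: 573 approvals out of 1000.
print(approval_ci(573, 1000))  # roughly (0.542, 0.604)
```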
Our Old Friend the Box Model

If you're following along in Chapter 21 of the Freedman, Pisani, Purves text, then it's perfectly equivalent to use the following box model to produce your results:
1. Start with a box containing tickets labeled with 0's and 1's. The proportion of 1's is $p$ and the proportion of 0's is $1 - p$.
2. Calculate the mean and SD of the box.
3. Find the standard deviation of the approval rating by dividing the SD of the box by $\sqrt{n}$.
4. Then, since we don't actually know $p$, estimate these numbers using the empirical mean $\hat p$ and SD $\sqrt{\hat p(1 - \hat p)}$.
5. Construct a 95% confidence interval using the formula $\hat p \pm 1.96\sqrt{\hat p(1 - \hat p)/n}$.

Interpreting Confidence Intervals
- Before we generate a confidence interval, we can say that the interval we get will have probability 0.95 of containing the true mean.
- Once we generate a specific interval, it either contains the true mean or it doesn't. It's a very common mistake to say that the specific interval we obtain contains the mean with probability 0.95. However, from our viewpoint the truth is not a random variable, so this interpretation is not valid.
- What we can say is that if we repeated the experiment a large number of times, then approximately 95% of the confidence intervals we generate will contain the true mean.

Example: Tarja Halonen Approval Poll

A total of $n = 1000$ Finns were surveyed independently with replacement to determine whether they approve of Tarja Halonen's job performance as president. A total of 573 respondents approved and 427 did not. Construct a 95% confidence interval for Tarja's approval rating.
$$\hat p \pm z_{0.95}\sqrt{\frac{\hat p(1 - \hat p)}{n}} = 0.573 \pm 1.96\sqrt{\frac{0.573(1 - 0.573)}{1000}} = (0.542,\ 0.604).$$
The margin of error is about $\pm 3.1\%$ for the poll.

Repeating the Experiment

Now let's pretend that we know Tarja Halonen's approval rating is exactly 0.53. If we conduct a large number of polls, what proportion of 95% confidence intervals will contain her true approval?

I performed this experiment a total of 10,000 times by simulating random numbers on a computer, which took about 4 seconds to run. I ultimately found that 9,473 of the experiments generated confidence intervals containing the truth, so a proportion of 0.9473 of all 95% confidence intervals contained the true value. Is this a reasonable proportion? Let's make another confidence interval.

We now have $n = 10{,}000$ experiments, and on each one the confidence interval we generated either contained the value 0.53 or it didn't. For $1 \le i \le n$, the data are of the form
$$Y_i = \begin{cases} 1 & \text{if experiment } i\text{'s CI contains } 0.53 \\ 0 & \text{otherwise.} \end{cases}$$
Because we were generating 95% confidence intervals, our assumption is that $P(Y_i = 1) = p = 0.95$. We can validate this assumption if the 95% CI for $\hat p$ contains 0.95. This confidence interval is
$$\hat p \pm 1.96\sqrt{\frac{\hat p(1 - \hat p)}{n}} = 0.9473 \pm 1.96\sqrt{\frac{0.9473(1 - 0.9473)}{10000}} = (0.9429,\ 0.9517).$$
Therefore the experiment produced a reasonable result that appears to validate the notion that roughly 95% of all confidence intervals will contain the true value.
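The repeated-polling experiment above is easy to reproduce. A hedged sketch (helper names are ours; with a different random seed the hit proportion will differ slightly from the 0.9473 reported above):

```python
import random
from math import sqrt

def ci_covers_truth(p, n, rng, z=1.96):
    """Simulate one poll of size n and report whether the 95% CI contains p."""
    approvals = sum(rng.random() < p for _ in range(n))
    p_hat = approvals / n
    half = z * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half <= p <= p_hat + half

rng = random.Random(1)
experiments = 10_000
hits = sum(ci_covers_truth(0.53, 1000, rng) for _ in range(experiments))
print(hits / experiments)  # should land near 0.95
```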
Producing Data

We previously focused on ways to analyze data that have already been collected:
- Summary statistics
- Looking for patterns in the data
- Relationships between variables

Conclusions from exploratory data analysis alone are often not sufficient, because striking patterns in the data can arise from many sources (i.e., lurking variables). We will focus on producing trustworthy data and on how to judge the quality of data produced by others. The design for how the data are collected is the most important prerequisite for trustworthy statistical inference. Sampling and experiments are used for collecting and producing data.

Observational Studies
- An observational study observes individuals and measures variables of interest but does not attempt to influence the responses.
- A sample survey is an example of an observational study. A sample is a small group of people that is used to represent the larger population. Example: opinion polls report the view of the entire population based on interviews with a sample of 1000 people.
- A census attempts to contact every individual in the entire population; it is often expensive, very time consuming, and inaccurate.
- If the goal is to get a picture of the entire population, disturbed as little as possible by the act of gathering information, observational studies are used.

First Steps for Data Collection

When you collect and produce data you need to know the following:
- The individuals of interest for the study.
- The variables to be measured, which must be clearly defined and measured accurately.
- Observational studies cannot control for lurking variables, so be careful when drawing conclusions from them. (Example: Simpson's paradox.)

Experiments
- An experiment deliberately imposes some treatment on the individuals in order to observe their response.
- Example: assigning some mice to high doses of saccharine and some to a control diet, and observing their respective incidences of cancer.
- For understanding cause and effect, experiments are the only source of fully convincing data.
- Generally, experiments are preferred over observational studies, especially for establishing causality, but they may be either impossible to conduct or unethical.

Design of Experiments
- Experimental units: the individuals on which the experiment is done.
- Treatment: a specific experimental condition applied to the units.
- Randomization: the use of chance to divide the experimental units into groups.
- Factors are often called the explanatory variables; levels are specific values of each factor that are applied to the experimental units.

Population and Samples

The population of interest is the group of individuals about which we want information. The sample is a part of the population from which we actually collect information, and from which we try to draw conclusions about the whole. We seek to design a sample that is representative of the population.

Some Bad Ideas
- Self-selection, for example call-in or instant polls. Respondents are not representative: they tend to have strong opinions and may be members of a non-representative TV channel or website audience.
- Convenience sampling, for example mall customers. They tend to be wealthier than the average American and are more likely to be either teens or retired. Furthermore, mall interviewers tend to pick clean-cut individuals, skewing the sample even more.

Confounding

Two variables are confounded when their effects on a response variable cannot be distinguished from each other. For example, suppose a smoking study contains individuals who are either male smokers or female nonsmokers. If the smoking group has a higher incidence of lung cancer than the nonsmoking group, we cannot tell if the effect is due to smoking or gender: smoking and gender are confounded. This is an exaggerated example, but confounding may be subtle, and one of the confounded variables may not even have been measured.

Simple Random Samples (SRS)

The bad sampling schemes above lead to bias: a systematic favoring of some outcomes over others. We seek an unbiased sampling scheme. A simple random sample (SRS) of size $n$ consists of $n$ individuals chosen from the population in such a way that every set of $n$ individuals has an equal chance of being selected. This can be accomplished by the proverbial "drawing numbers from a hat":
- Use a computer.
- Use a random number table.
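Drawing an SRS by computer, as suggested above, is one call in Python's standard library (the 500-label population is an invented stand-in):

```python
import random

population = list(range(1, 501))  # labels 1..500 for a population of 500
rng = random.Random(2024)

# random.sample draws without replacement, and every set of n labels
# is equally likely -- exactly the definition of an SRS.
srs = rng.sample(population, 10)
print(sorted(srs))
```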
To choose an SRS by hand, first label each individual in the population using the smallest possible labels. Then use the random number table to select labels at random, throwing out any labels that do not correspond to an individual.

SRS or Not?

Is each of the following samples an SRS or not?
- A deck of cards is shuffled and the top five dealt.
- A telephone survey is conducted by dialing telephone numbers at random, i.e., each valid phone number is equally likely.
- A sample of 10% of the Berkeley student body is chosen by numbering the students $1, \ldots, N$, drawing a random integer $i$ from 1 to 10, and taking every tenth student beginning with $i$ (e.g., if $i = 5$, students $5, 15, 25, \ldots$ are chosen).

Multistage Samples

As the name would imply, a multistage sample is drawn in stages. This is often done for nationwide samples of families, households, or individuals, since the cost of sending interviewers to widely scattered households would be too high. The Current Population Survey (for data on employment and unemployment):
- Stage 1: divide the US into 2007 geographical areas called Primary Sampling Units (PSUs). Select a sample of 754 PSUs.
- Stage 2: divide each PSU selected in stage 1 into small areas called census blocks. Stratify the blocks using ethnic and other information, and take a stratified sample of the blocks in each PSU.
- Stage 3: group the housing units in each block into clusters of four nearby units. Interview the households in a random sample of these clusters.
The final sample consists of clusters of nearby households that an interviewer can easily visit.

Stratified Samples

In general, a probability sample is a sample chosen in such a way that we know what samples are possible and what probability each possible sample has. Often an SRS is not practical, and we need alternative types of probability samples.

To select a stratified random sample, first divide the population into groups of similar individuals, called strata. Then choose a separate SRS in each stratum and combine these SRSs to form the full sample. A stratified design can produce more exact information than an SRS of the same size by taking advantage of the fact that individuals in the same stratum are similar to one another. Example: a population of election districts might be divided into urban, suburban, and rural strata.
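A minimal sketch of the two-step stratified recipe above (the district lists are invented placeholders):

```python
import random

def stratified_sample(strata, per_stratum, seed=0):
    """Draw a separate SRS within each stratum and combine them."""
    rng = random.Random(seed)
    sample = []
    for name, members in strata.items():
        sample.extend((name, m) for m in rng.sample(members, per_stratum))
    return sample

districts = {
    "urban":    [f"u{i}" for i in range(40)],
    "suburban": [f"s{i}" for i in range(40)],
    "rural":    [f"r{i}" for i in range(40)],
}
print(stratified_sample(districts, 5))
```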
Potential Problems with Surveys
- Undercoverage: some groups in the population may be left out, e.g., the homeless, prison inmates, students in dormitories, households without telephones.
- Nonresponse: a selected individual can't be contacted or refuses to cooperate. This can be a serious problem; in one telephone survey, more than half of the households contacted would not grant the interview, a nonresponse rate of 53%.
- Response bias: respondents may lie, particularly about illegal or unpopular behavior.
- Wording effects: leading questions, or even seemingly innocuous words with positive or negative connotations, may affect survey results.

Statistical Inference
- Typical situation: we want to answer a question about a population of individuals.
- To answer the question, we use a sample of individuals from the population. Using the sample, we try to draw conclusions about the entire population.
- Parameter: a number, unknown in practice, that describes the population. We will call this $p$.
- Statistic: a number describing the sample; it changes from sample to sample. We will call this $\hat p$.
- A statistic is used to estimate an unknown parameter.

Example: US presidential election forecast, 1936. The Literary Digest mailed questionnaires to 10 million people (about 25% of voters at the time); 2.4 million people responded. Their prediction: Landon 57%, Roosevelt 43%. Actual result: Roosevelt 62%, Landon 38%. What went wrong?
- Selection bias: telephone books, club memberships, mail order lists, automobile ownership lists.
- Nonresponse bias: only 24% responded, and these were biased toward the Republicans.
The Gallup Poll surveyed 50,000 people and correctly predicted Roosevelt's victory.

Example: We are interested in the percentage of American adults that find shopping for clothes frustrating and time consuming. 2500 people are selected using a simple random sample, and each individual is interviewed. 1650 individuals in the sample agreed that shopping is often frustrating and time consuming.
- What is the population?
- What is the sample?
- What is the parameter?
- What is the statistic?

We want to estimate the parameter $p$, the percentage of Americans that find shopping frustrating and time consuming. Using the sample, we have $\hat p = 1650/2500 = 0.66 = 66\%$.
- If a second random sample of 2500 adults is taken, would we expect exactly 1650 people to agree that shopping is frustrating?
- No! The new sample will have different people in it, so the results will not be the same. The new sample may have 1440 people agree that shopping is often frustrating, giving $\hat p = 0.576 = 57.6\%$ for this sample.
- The value of $\hat p$ will vary from sample to sample.

The sampling distribution of a statistic is the distribution of the statistic in all possible samples of the same size from the same population.

Bias and Variability
- Bias concerns the center of the sampling distribution. A statistic is unbiased if the mean of its sampling distribution is equal to the true value of the parameter being estimated. Bias of an estimator = mean of sampling distribution $-$ true value of parameter.
- The variability of a statistic refers to the spread of its sampling distribution. The spread is determined by the sampling design and the sample size, and shrinks as the sample size grows. Variability of an estimator = SD of sampling distribution.

Sampling Distribution for the Shopping Example
- The shape of the distribution of $\hat p$ will be approximately Normal. (We will see why later.)
- The center, or mean, of the distribution will be $p$. This is true for both large and small samples.
- The spread of the distribution depends on the size of the sample: the values of $\hat p$ for samples of size 2500 will be much less spread out than the values from samples of size 100.
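The claim about spread in the last bullet can be checked by simulation. A rough sketch (the helper name is ours; 0.66 is the observed proportion from the example):

```python
import random
from statistics import mean, stdev

def one_sample_proportion(p, n, rng):
    """Proportion who agree in one SRS of size n (modeled as iid Bernoulli draws)."""
    return sum(rng.random() < p for _ in range(n)) / n

rng = random.Random(3)
for n in (100, 2500):
    p_hats = [one_sample_proportion(0.66, n, rng) for _ in range(5000)]
    # The mean stays near 0.66, while the spread shrinks with larger n.
    print(n, round(mean(p_hats), 4), round(stdev(p_hats), 4))
```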
Inference for Regression

Outline:
- The simple linear regression model
- Estimating regression parameters
- Confidence intervals and significance tests for regression parameters
- Inference about prediction
- Analysis of variance for regression
- The regression fallacy

Simple Linear Regression Model

The simple linear regression model states that the response variable $y$ and the explanatory variable $x$ have a linear relationship of the form
$$y = \beta_0 + \beta_1 x + \epsilon,$$
where
- $\beta_0$ and $\beta_1$ are the intercept and the slope of the true population regression line;
- $\epsilon \sim N(0, \sigma)$;
- the $\epsilon$'s corresponding to the pairs $(x_i, y_i)$ are independent of each other.

Given $x$, $y$ has mean $\beta_0 + \beta_1 x$ and variance $\sigma^2$: $E(y|x) = \mu_{y|x} = \beta_0 + \beta_1 x$ is called the population regression line, and $\mathrm{Var}(y|x) = \sigma^2_{y|x} = \sigma^2$.

Simple Linear Regression

Earlier in the course we discussed how to find the best fitting line for bivariate data. Here we consider that problem from the perspective of statistical inference. Suppose we observe pairs of observations $(x_1, y_1), \ldots, (x_n, y_n)$. For example:
- $x$ = father's height, $y$ = son's height
- $x$ = midterm score, $y$ = final score
- $x$ = temperature, $y$ = yield

The values of $x$ define different groups of subjects, which we think of as belonging to subpopulations, one for each possible value of $x$. Let $\mu_{y|x}$ and $\sigma^2_{y|x}$ denote the mean and variance of $y$ in the subpopulation with a certain value of $x$. Under the linear regression model with equal variance,
$$\mu_{y|x} = \beta_0 + \beta_1 x \quad \text{and} \quad \sigma^2_{y|x} = \sigma^2$$
(possibly after transformation).

Estimating the Regression Parameters by Least Squares

Given a sample of $n$ pairs of observations $(x_1, y_1), \ldots, (x_n, y_n)$, we use the method of least squares to estimate the unknown parameters $\beta_0$, $\beta_1$, and $\sigma$. This gives us the fitted line $\hat y = b_0 + b_1 x$, where
- $b_1 = r\,\frac{s_y}{s_x}$ is the estimate of $\beta_1$;
- $b_0 = \bar y - b_1 \bar x$ is the estimate of $\beta_0$.

Recall that the residual is the difference between the observed value and the predicted value:
$$e_i = y_i - b_0 - b_1 x_i = y_i - \hat y_i.$$
The sample variance of the $e_i$,
$$s^2 = \frac{\sum_{i=1}^n e_i^2}{n - 2},$$
can be used to estimate $\sigma^2$; $s$ is called the regression standard error, and it has $n - 2$ degrees of freedom. Why $n - 2$ degrees of freedom?

CIs for Regression Parameters

Under the assumption that $\epsilon \sim N(0, \sigma)$,
$$b_1 \sim N\!\left(\beta_1,\ \frac{\sigma}{\sqrt{\sum_i (x_i - \bar x)^2}}\right), \qquad b_0 \sim N\!\left(\beta_0,\ \sigma\sqrt{\tfrac{1}{n} + \tfrac{\bar x^2}{\sum_i (x_i - \bar x)^2}}\right).$$
We don't know $\sigma$, so we will use $s$ to estimate it. This leads to $t$ confidence intervals for $\beta_0$ and $\beta_1$.

Conditions for regression inference:
- The sample is an SRS from the population.
- There is a linear relationship in the population. We check this condition by assessing the linearity of a scatterplot of the sample data.
- The standard deviation of the responses about the population line is the same for all values of the explanatory variable. We check this by plotting the residuals and observing whether or not the spread of the observations around the least-squares line is roughly uniform as $x$ varies.
- The response varies Normally about the population regression line. We check this condition by observing a Normal quantile plot of the residuals.

Note that the last three conditions are statements about the population that cannot be verified directly; we use the sample to assess their reasonability.

A level $1 - \alpha$ confidence interval for $\beta_0$ is given by
$$(b_0 - t^*\,\mathrm{SE}(b_0),\ b_0 + t^*\,\mathrm{SE}(b_0)),$$
where $t^*$ is the upper $\alpha/2$ critical value of the $t_{n-2}$ distribution and $\mathrm{SE}(b_0) = s\sqrt{\tfrac{1}{n} + \tfrac{\bar x^2}{\sum_i (x_i - \bar x)^2}}$.

A level $1 - \alpha$ confidence interval for $\beta_1$ is given by
$$(b_1 - t^*\,\mathrm{SE}(b_1),\ b_1 + t^*\,\mathrm{SE}(b_1)),$$
where $t^*$ is the upper $\alpha/2$ critical value of the $t_{n-2}$ distribution and
$$\mathrm{SE}(b_1) = \frac{s}{\sqrt{\sum_i (x_i - \bar x)^2}}.$$

Hypothesis Tests for Regression Parameters

To test the hypothesis $H_0\colon \beta_1 = a$, we use the test statistic
$$t = \frac{b_1 - a}{\mathrm{SE}(b_1)}.$$
- The $p$-value for the test statistic is found from the $t_{n-2}$ distribution.
- If the regression assumptions are true, testing $H_0\colon \beta_1 = 0$ corresponds to testing whether or not there is a linear relationship between $y$ and $x$.
A similar test can be performed for $\beta_0$, but it is rarely of interest.
Example: Fire Damage and Distance to Fire Station

Suppose a fire insurance company wants to relate the amount of fire damage in major residential fires to the distance between the burning house and the nearest fire station. The study is to be conducted in a large suburb of a major city; a sample of 15 recent fires in this suburb is selected. The amount of damage (in thousands of dollars) and the distance (in miles) between the fire and the nearest fire station are recorded for each fire.

Obs  Dist  Damage
 1   0.7   14.1
 2   1.1   17.3
 3   1.8   17.8
 4   2.1   24.0
 5   2.3   23.1
 6   2.6   19.6
 7   3.0   22.3
 8   3.1   27.5
 9   3.4   26.2
10   3.8   26.1
11   4.3   31.3
12   4.6   31.3
13   4.8   36.4
14   5.5   36.0
15   6.1   43.2

[Figure: scatterplot of damage vs. distance to the fire station.]

Performing this regression analysis in STATA yields the following results:

. regress damage distance

      Source |       SS       df       MS         Number of obs =      15
-------------+---------------------------        F(1, 13)      =  156.89
       Model |  841.766403     1  841.766403     Prob > F      =  0.0000
    Residual |  69.7509359    13  5.36545661     R-squared     =  0.9235
-------------+---------------------------        Adj R-squared =  0.9176
       Total |  911.517339    14  65.1083814     Root MSE      =  2.3163

    damage |     Coef.   Std. Err.      t     P>|t|     [95% Conf. Interval]
  distance |  4.919331   .3927473    12.525   0.000     4.070851    5.767811
     _cons |  10.27793   1.420278     7.237   0.000     7.209605    13.34625

The following are a residual plot and a Normal quantile plot of the residuals:

[Figures: residuals vs. distance, and Normal quantile plot of the residuals.]

Example (cont.)

The fitted line is
$$\widehat{\text{damage}} = 10.28 + 4.92\,\text{dist}.$$
Suppose we want to predict the mean amount of damage for fires 2 miles from the nearest fire station. In this case $x^* = 2$, and our prediction is $10.28 + 4.92 \times 2 = 20.12$.

Inference about Prediction

What if we want to predict the amount of damage of a single burning house which is 2 miles from the nearest fire station? The prediction is still $10.28 + 4.92 \times 2 = 20.12$. The predicted values are the same, but they have different standard errors: individual burning houses 2 miles away from the fire station don't all have the same amount of damage, so the prediction for an individual amount of damage has a larger standard error than the prediction for the mean amount of damage.

CIs for the Mean Response

For a specific value of $x$, say $x^*$, the assumption is that $y$ comes from a $N(\mu_{y|x^*}, \sigma)$ distribution, where $\mu_{y|x^*} = \beta_0 + \beta_1 x^*$. Plugging in our estimates of $\beta_0$ and $\beta_1$, $\mu_{y|x^*}$ is estimated by $\hat y = b_0 + b_1 x^*$, and a level $1 - \alpha$ confidence interval for the mean response $\mu_{y|x^*}$ is given by
$$\hat y \pm t^*\,\mathrm{SE}(\hat\mu), \qquad \mathrm{SE}(\hat\mu) = s\sqrt{\frac{1}{n} + \frac{(x^* - \bar x)^2}{\sum_i (x_i - \bar x)^2}},$$
where $t^*$ is the upper $\alpha/2$ critical value of the $t_{n-2}$ distribution.

Prediction Interval for a Future Observation

Suppose we want to predict a specific observation value at $x = x^*$. At each $x^*$, $y \sim N(\mu_{y|x^*}, \sigma)$. We want to predict a $y$ drawn from this distribution; our best guess is the estimated mean of the distribution, $\hat y = b_0 + b_1 x^*$. How accurate is this estimate? The error here will be larger than the error for the mean response, because there is error in estimating $\mu_{y|x^*}$ as well as error in drawing a value from the Normal distribution $N(\mu_{y|x^*}, \sigma)$. A level $1 - \alpha$ prediction interval for a future observation $y$ corresponding to $x^*$ is given by
$$\hat y \pm t^*\,\mathrm{SE}(\hat y), \qquad \mathrm{SE}(\hat y) = s\sqrt{1 + \frac{1}{n} + \frac{(x^* - \bar x)^2}{\sum_i (x_i - \bar x)^2}},$$
where $t^*$ is the upper $\alpha/2$ critical value of the $t_{n-2}$ distribution.

Analysis of Variance for Regression

Analysis of variance is the term for statistical analyses that break down the variation in data into separate pieces that correspond to different sources of variation. In the regression setting, the observed variation in the responses comes from two sources:
- As the explanatory variable $x$ changes, it pulls the response with it along the regression line. This is the variation along the line, or regression sum of squares:
$$SS_{\text{Regression}} = \sum_{i=1}^n (\hat y_i - \bar y)^2.$$
- When $x$ is held fixed, $y$ still varies, because not all individuals who share a common $x$ have the same response $y$. This is the variation about the line, or residual sum of squares:
$$SS_{\text{Residual}} = \sum_{i=1}^n (y_i - \hat y_i)^2.$$

The ANOVA Equation

It turns out that $SS_{\text{Residual}}$ and $SS_{\text{Regression}}$ together account for all the variation in $y$:
$$SS_{\text{Total}} = \sum_{i=1}^n (y_i - \bar y)^2 = SS_{\text{Regression}} + SS_{\text{Residual}}.$$
The degrees of freedom break down in a similar manner: $n - 1 = 1 + (n - 2)$. Dividing a sum of squares by its degrees of freedom gives a mean square; note that
$$MS_{\text{Residual}} = \frac{\sum_i (y_i - \hat y_i)^2}{n - 2} = s^2.$$

The ANOVA F Statistic

As an alternative test of the hypothesis $H_0\colon \beta_1 = 0$, we use the $F$ statistic
$$F = \frac{MS_{\text{Regression}}}{MS_{\text{Residual}}} = \frac{SS_{\text{Regression}}/df_{\text{Regression}}}{SS_{\text{Residual}}/df_{\text{Residual}}}.$$
Under $H_0$, $F \sim F_{1, n-2}$, an $F$ distribution with 1 and $n - 2$ degrees of freedom. Also,
$$R^2 = \frac{SS_{\text{Regression}}}{SS_{\text{Total}}} = r^2.$$
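The STATA numbers above can be reproduced from the raw data with the textbook formulas. A hedged Python sketch (variable names are ours; $t^* = 2.160$ is the tabled $t_{13}$ critical point):

```python
from math import sqrt

# Fire-damage data from the table above: (distance in miles, damage in $1000s).
x = [0.7, 1.1, 1.8, 2.1, 2.3, 2.6, 3.0, 3.1, 3.4, 3.8, 4.3, 4.6, 4.8, 5.5, 6.1]
y = [14.1, 17.3, 17.8, 24.0, 23.1, 19.6, 22.3, 27.5, 26.2, 26.1, 31.3, 31.3, 36.4, 36.0, 43.2]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

b1 = sxy / sxx                     # slope
b0 = ybar - b1 * xbar              # intercept
s = sqrt(sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y)) / (n - 2))

t_star = 2.160                     # upper 0.025 critical value of t with 13 df
x_new = 2.0
y_hat = b0 + b1 * x_new
se_mean = s * sqrt(1 / n + (x_new - xbar) ** 2 / sxx)      # CI for mean response
se_pred = s * sqrt(1 + 1 / n + (x_new - xbar) ** 2 / sxx)  # prediction interval

print(round(b0, 2), round(b1, 2))                          # about 10.28 and 4.92
print(y_hat - t_star * se_mean, y_hat + t_star * se_mean)  # mean-response CI at x*=2
print(y_hat - t_star * se_pred, y_hat + t_star * se_pred)  # wider prediction interval
```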
The Regression Fallacy

Sir Francis Galton (1822–1911), who was the first to apply regression to biological and psychological data, looked at examples such as the heights of children versus the heights of their parents. He found that taller-than-average parents tended to have children who were also taller than average, but not as tall as their parents. Galton called this fact "regression toward mediocrity." As another example, students who score at the bottom on the first exam in a course are likely to do better on the second exam. Is it because they work harder?

Inference for Two-Way Tables

Outline:
- Two-way tables for categorical data
- Chi-square test for a two-way table
- Models for two-way tables
  - Examining independence between variables
  - Comparing several populations

Example: Background Music and Consumer Behavior

In a study conducted in a Northern Ireland supermarket, researchers counted the number of bottles of French, Italian, and other wine purchased while shoppers were subject to one of three treatments: no music, French accordion music, and Italian string music. The following two-way table summarizes the data:

                    Music
Wine      None  French  Italian  Total
French      30      39       30     99
Italian     11       1       19     31
Other       43      35       35    113
Total       84      75       84    243

The table of counts looks suspiciously like the joint distribution tables we studied earlier. Indeed, from these counts we can ascertain the empirical joint distribution, marginal distributions, and conditional distributions of wine type and music type:

                    Music
Wine      None   French  Italian  Total
French    0.123   0.160    0.123  0.407
Italian   0.045   0.004    0.078  0.128
Other     0.177   0.144    0.144  0.465
Total     0.346   0.309    0.346  1.000

We are interested in determining whether there is a relationship between the row variable (wine type) and the column variable (music type). If this were the true distribution, then the answer would be clear: music and wine are not independent, so there is a relationship. However, this table is random, and we want to know whether or not music and wine are independent under the true distribution. This requires a statistical test.

The $\chi^2$ Test for an $r \times c$ Table

Hypotheses:
- $H_0$: the row and column variables are independent, i.e., there is no relationship between the two.
- $H_a$: the row and column variables are dependent.

Intuition for the Test

Suppose $H_0$ is true and the two variables are independent. What counts would we expect to observe? Recall that under the independence assumption, $P(A \cap B) = P(A)P(B)$. Thus, for each cell we have
$$\text{Expected Cell Count} = \frac{\text{row total} \times \text{column total}}{\text{total count}}.$$
Our test will be based on a measure of how far the observed table is from the expected table.

Example (cont.)

For the supermarket example, the expected counts are:

                    Music
Wine      None   French  Italian  Total
French    34.22   30.56    34.22     99
Italian   10.72    9.57    10.72     31
Other     39.06   34.88    39.06    113
Total        84      75       84    243

The $\chi^2$ (Chi-Squared) Statistic

To measure how far the expected table is from the observed table, we use the following test statistic:
$$\chi^2 = \sum_{\text{all cells}} \frac{(\text{Observed} - \text{Expected})^2}{\text{Expected}}.$$

What does the $\chi^2$ distribution look like?

[Figure: chi-squared densities for several degrees of freedom.]

Unlike the Normal or $t$ distributions, the $\chi^2$ distribution takes values in $(0, \infty)$. As with the $t$ distribution, the exact shape of the $\chi^2$ distribution depends on its degrees of freedom.

The $\chi^2$ Distribution

Under $H_0$, the $\chi^2$ test statistic has an approximate $\chi^2$ distribution with $(r - 1)(c - 1)$ degrees of freedom, denoted $\chi^2_{(r-1)(c-1)}$.

Why $(r - 1)(c - 1)$? Recall that our expected table is based on some quantities estimated from the data, namely the row and column totals. Once these totals are known, filling in any $(r - 1)(c - 1)$ undetermined table entries actually gives us the whole table. Thus there are only $(r - 1)(c - 1)$ freely varying quantities in the table.
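As a check on the arithmetic, scipy can run this test directly on the observed counts; the result should agree with the expected-count table above and with the worked statistic that follows (a sketch, assuming scipy is available):

```python
from scipy.stats import chi2_contingency

observed = [
    [30, 39, 30],   # French wine: no music, French music, Italian music
    [11,  1, 19],   # Italian wine
    [43, 35, 35],   # Other wine
]

chi2, p, dof, expected = chi2_contingency(observed)
print(round(chi2, 2), dof, p)   # about 18.28 with 4 df; p near 0.001
print(expected)                 # matches the expected-count table above
```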
$p$-Value for the $\chi^2$ Test

If the observed and expected counts are very different, $\chi^2$ will be large, indicating evidence against $H_0$. Thus the $p$-value is always based on the right-hand tail of the distribution; there is no notion of a two-tailed test in this context. The $p$-value is therefore
$$P\!\left(\chi^2_{(r-1)(c-1)} \ge \chi^2\right).$$

Recall that $\chi^2$ has an approximate $\chi^2_{(r-1)(c-1)}$ distribution. When is the approximation valid?
- For any two-way table larger than $2 \times 2$, we require that the average expected cell count is at least 5 and each expected count is at least 1.
- For $2 \times 2$ tables, we require that each expected count be at least 5.

Example (cont.)

Let's get back to our example. Recall the observed and expected counts:

            Observed               Expected
Wine      None  F   It      None     F      It     Tot
French      30  39  30      34.22  30.56  34.22     99
Italian     11   1  19      10.72   9.57  10.72     31
Other       43  35  35      39.06  34.88  39.06    113
Total       84  75  84         84     75     84    243

$$\chi^2 = \frac{(30 - 34.22)^2}{34.22} + \frac{(39 - 30.56)^2}{30.56} + \frac{(30 - 34.22)^2}{34.22} + \cdots + \frac{(35 - 34.88)^2}{34.88} + \frac{(35 - 39.06)^2}{39.06} = 18.28.$$

The table is $3 \times 3$, so there are $(r - 1)(c - 1) = 2 \times 2 = 4$ degrees of freedom. Finally, the $p$-value is found from the $\chi^2_4$ table:
$$0.001 \le P(\chi^2_4 \ge 18.28) \le 0.002.$$

Comparing Several Populations

Suppose we select independent SRSs, of sizes $n_1, n_2, \ldots, n_c$, from each of $c$ populations. We then classify each individual according to a categorical response variable with $r$ possible values (the same across populations). This yields an $r \times c$ table, and a $\chi^2$ test can be used to test
- $H_0$: the distribution of the response variable is the same in all populations;
- $H_a$: the distributions of the response variable are not all the same.

Example: Suppose we select independent SRSs of Psychology, Biology, and Math majors, of sizes 40, 39, and 35, and classify each individual by GPA range. Then we can use a $\chi^2$ test to ascertain whether or not the distribution of grades is the same in all three populations.

Models for Two-Way Tables

The $\chi^2$ test for the presence of a relationship between the two directions in a two-way table is valid for data produced by several different study designs, although the exact null hypothesis varies.

1. Examining independence between variables. Suppose we select an SRS of size $n$ from a population and classify each individual according to two categorical variables. Then a $\chi^2$ test can be used to test
- $H_0$: the two variables are independent;
- $H_a$: the two variables are not independent.

Example: Suppose we collect an SRS of 114 college students and categorize each by major and GPA range (e.g., 0–0.5, 0.5–1, ..., 3.5–4). Then we can use a $\chi^2$ test to ascertain whether grades and major are independent.

Example: Literary Analysis (Rice, 1995)

When Jane Austen died, she left the novel Sanditon only partially completed; an imitator finished it. The table below gives counts of six words in several chapters from various works: Austen's Sense and Sensibility, Emma, and her portion of Sanditon (Sanditon I), and the imitator's portion (Sanditon II).

Word      Sense and Sensibility  Emma  Sanditon I  Sanditon II
a                          147    186         101           83
an                          25     26          11           29
this                        32     39          15           15
that                        94    105          37           22
with                        59     74          28           43
without                     18     10          10            4
TOTAL                      375    440         202          196

Question 1: Is there consistency in Austen's work? Do the frequencies with which Austen used these words change from work to work?
Answer: $\chi^2 = 12.27$ with $(6 - 1)(3 - 1) = 10$ degrees of freedom; $p$-value = ______.

Question 2: Was the imitator successful? Are the frequencies of the words the same in Austen's work and the imitator's work?
Tests of Significance

Outline:
- General procedure for hypothesis testing
  - Null and alternative hypotheses
  - Test statistics
  - $p$-values
- Interpretation of the significance level
- Tests for a population mean
- Interpretation of $p$-values
- Statistical vs. practical significance
- Confidence intervals and hypothesis tests
- Potential abuses of tests

Testing Hypotheses

A hypothesis test is an assessment of the evidence provided by the data in favor of, or against, some claim about the population. For example, suppose we perform a randomized experiment or take a random sample and calculate some sample statistic, say the sample mean. We want to decide if the observed value of the sample statistic is consistent with some hypothesized value of the corresponding population parameter. If the observed and hypothesized values differ, as they almost certainly will, is the difference due to an incorrect hypothesis or merely due to chance variation?

A confidence interval is a very useful statistical inference tool when the goal is to estimate a population parameter. When the goal is to assess the evidence provided by the data in favor of some claim about the population, tests of significance are used.

Example: Filling Coke Bottles

A machine at a Coke production plant is designed to fill bottles with 16 oz of Coke. The actual amount varies slightly from bottle to bottle; from past experience it is known that the SD is 0.2 oz. An SRS of 100 bottles filled by the machine has a mean of 15.94 oz per bottle. Is this evidence that the machine needs to be recalibrated, or could this difference be a result of random variation?

General Procedure for Hypothesis Testing

1. Formulate the null hypothesis and the alternative hypothesis.
- The null hypothesis $H_0$ is the statement being tested. Usually it states that the difference between the observed value and the hypothesized value is only due to chance variation. For example: $\mu = 16$ oz.
- The alternative hypothesis $H_a$ is the statement we will favor if we find evidence that the null hypothesis is false. It usually states that there is a real difference between the observed and hypothesized values. For example: $\mu \ne 16$, $\mu > 16$, or $\mu < 16$.

A test is called
- two-sided if $H_a$ is of the form $\mu \ne 16$;
- one-sided if $H_a$ is of the form $\mu > 16$ or $\mu < 16$.

Example: GRE Scores

The mean score of all examinees on the Verbal and Quantitative sections of the GRE is about 1040. Suppose 50 randomly sampled UC Berkeley graduate students have a mean GRE V+Q score of 1310. We are interested in determining if a mean GRE V+Q score of 1310 gives evidence that, as a whole, Berkeley graduate students have a higher mean GRE score than the national average. What is $H_0$? What is $H_a$?

2. Calculate the test statistic on which the test will be based.

The test statistic measures the difference between the observed data and what would be expected if the null hypothesis were true. When $H_0$ is true, we expect the estimate based on the sample to take a value near the parameter value specified by $H_0$. Our goal is to answer the question: how extreme is the value calculated from the sample, compared with what we would expect under the null hypothesis? In many common situations the test statistic has the form
$$\frac{\text{estimate} - \text{hypothesized value}}{\text{standard deviation of the estimate}}.$$
For the Coke example, the test statistic is
$$z = \frac{15.94 - 16}{0.2/\sqrt{100}}.$$
We'll have more to say about this in a moment.

3. Find the $p$-value of the observed result.
- The $p$-value is the probability of observing a test statistic as extreme or more extreme than actually observed, assuming the null hypothesis $H_0$ is true.
- The smaller the $p$-value, the stronger the evidence against the null hypothesis.
- If the $p$-value is as small or smaller than some number $\alpha$ (e.g., 0.01, 0.05), we say that the result is statistically significant at level $\alpha$.
- $\alpha$ is called the significance level of the test.

In the case of the Coke example, $p = 0.0013$ for a one-sided test, or $p = 0.0026$ for a two-sided test. Once again, we'll have more to say about this in a moment.
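The Coke numbers amount to a one-sample z test. A small sketch using only the standard library (the `z_test` helper is ours):

```python
from math import sqrt, erf

def z_test(xbar, mu0, sigma, n):
    """One-sample z statistic and one/two-sided p-values via the Normal CDF."""
    z = (xbar - mu0) / (sigma / sqrt(n))
    phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))  # standard Normal CDF
    one_sided = phi(z) if z < 0 else 1 - phi(z)
    return z, one_sided, 2 * one_sided

print(z_test(15.94, 16, 0.2, 100))  # z = -3, p about 0.0013 / 0.0026
```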
Interpretation of the Significance Level

To perform a test of significance level $\alpha$, we perform the previous three steps and then reject $H_0$ if the $p$-value is less than $\alpha$. The following outcomes are possible when conducting a test:

                          Our Decision
Reality          $H_0$                $H_a$
$H_0$            correct              Type I error
$H_a$            Type II error        correct

Suppose $H_0$ is actually true. If we draw many samples and perform a test for each one, a proportion $\alpha$ of these tests will incorrectly reject $H_0$. In other words, $\alpha$ is the probability that we will make a Type I error. Type II error is related to the notion of the power of a test, which we will discuss later.

Example: An Exact Binomial Test

In the last 51 World Series (through 2003), there have been 24 seven-game series. Suppose we wish to test the hypothesis
$H_0$: games within a World Series are independent, with each team having probability $\frac{1}{2}$ of winning each game.
For the alternative hypothesis, let's use the generic
$H_a$: the model in $H_0$ is incorrect.

Under $H_0$, the number of games $X$ in a series has a known distribution; in particular, $P(X = 7) = \binom{6}{3}\left(\tfrac{1}{2}\right)^6 = \tfrac{5}{16}$. For our test statistic, let's just use $M$, the number of seven-game series. Assuming different years' World Series are independent (i.e., that the last 51 World Series are an SRS from the population of World Series), the number of seven-game series in 51 trials is $B(51, \tfrac{5}{16})$.

What is the $p$-value? We need to find $m$ such that $P_{H_0}(M \ge m) \le \alpha$:
$$P(M \ge 20) = 0.086, \qquad P(M \ge 21) = 0.049.$$
We want a significance level of no more than $\alpha = 5\%$, so the critical value will be 21. Do we reject $H_0$ at significance level $\alpha = 0.05$? This is just a matter of checking whether our observed value of $M = 24$ exceeds the critical value 21. It does, so we reject $H_0$.

Tests for a Population Mean

In the preceding example, we were able to perform an exact Binomial test. Frequently an exact test is impractical, but we can use the approximate normality of means to conduct an approximate test. Suppose we want to test the hypothesis that $\mu$ has a specific value: $H_0\colon \mu = \mu_0$. Since $\bar x$ estimates $\mu$, the test is based on $\bar x$, which has a (perhaps approximately) Normal distribution. Thus
$$z = \frac{\bar x - \mu_0}{\sigma/\sqrt{n}}$$
is a standard Normal random variable under the null hypothesis.

$p$-values for different alternative hypotheses:
- $H_a\colon \mu > \mu_0$: the $p$-value is $P(Z \ge z)$, the area of the right-hand tail.
- $H_a\colon \mu < \mu_0$: the $p$-value is $P(Z \le z)$, the area of the left-hand tail.
- $H_a\colon \mu \ne \mu_0$: the $p$-value is $2P(Z \ge |z|)$, the area of both tails.

Example: Filling Coke Bottles (cont.)

We are interested in assessing whether or not the machine needs to be recalibrated, which will be the case if it is systematically over- or under-filling bottles. Thus we will use the hypotheses
$$H_0\colon \mu = 16, \qquad H_a\colon \mu \ne 16.$$
Recall that $\bar x = 15.94$, $\sigma = 0.2$, and $n = 100$. Thus
$$z = \frac{\bar x - \mu_0}{\sigma/\sqrt{n}} = -3.$$
The $p$-value for a two-sided test is $p = 2P(Z \ge 3) = 0.0026$. If $\alpha = 0.01$, we reject $H_0$. If $\alpha = 0.05$, we reject $H_0$.

Example: TV Tubes

TV tubes are taken at random and their lifetimes measured: $n = 100$, $\sigma = 300$, and $\bar x = 1265$ days. Test whether the population mean is 1200 or greater than 1200.
$$H_0\colon \mu = 1200, \qquad H_a\colon \mu > 1200.$$
Under $H_0$, $\bar X \sim N(1200, 30)$, so $z \sim N(0, 1)$ under $H_0$. The test statistic is $z = \frac{1265 - 1200}{30} = 2.17$, and the $p$-value is $P(Z \ge 2.17 \mid H_0) = 0.015$. This is evidence against $H_0$ at significance level 0.05, so we reject $H_0$. That is, we conclude that the average lifetime of TV tubes is greater than 1200 days.

Confidence Intervals and Hypothesis Tests

A level $\alpha$ two-sided test rejects a hypothesis $H_0\colon \mu = \mu_0$ exactly when the value of $\mu_0$ falls outside a $1 - \alpha$ confidence interval for $\mu$. For example, consider a two-sided test of the hypotheses
$$H_0\colon \mu = \mu_0, \qquad H_a\colon \mu \ne \mu_0$$
at the significance level $\alpha = 0.05$.
- If $\mu_0$ is a value inside the 95% confidence interval for $\mu$, then this test will have a $p$-value greater than 0.05 and therefore will not reject $H_0$.
- If $\mu_0$ is a value outside the 95% confidence interval for $\mu$, then this test will have a $p$-value smaller than 0.05 and therefore will reject $H_0$.
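The duality above is easy to check numerically for the Coke example: build the 95% interval and see whether $\mu_0 = 16$ falls inside (a sketch; all numbers come from the example):

```python
from math import sqrt

# Coke example: xbar = 15.94, sigma = 0.2, n = 100, H0: mu = 16.
xbar, sigma, n, mu0 = 15.94, 0.2, 100, 16
se = sigma / sqrt(n)

lo, hi = xbar - 1.96 * se, xbar + 1.96 * se
print((round(lo, 3), round(hi, 3)))   # (15.901, 15.979)
print(lo <= mu0 <= hi)                # False -> reject H0 at alpha = 0.05
```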
A Rough Interpretation of $p$-values

$p$-value             Interpretation
$p > 0.10$            no evidence against $H_0$
$0.05 < p \le 0.10$   weak evidence against $H_0$
$0.01 < p \le 0.05$   evidence against $H_0$
$p \le 0.01$          strong evidence against $H_0$

Statistical vs. Practical Significance

Saying that a result is statistically significant does not signify that it is large or necessarily important; that decision depends on the particulars of the problem. A statistically significant result only says that there is substantial evidence that $H_0$ is false. Failure to reject $H_0$ does not imply that $H_0$ is correct; it only implies that we have insufficient evidence to conclude that $H_0$ is incorrect.

Example: A particular area contains 8000 condominium units. In a survey of the occupants, a simple random sample of size 100 yields the information that there are 160 motor vehicles in the sample, giving an average number of motor vehicles per unit of 1.6, with a sample standard deviation of 0.8. Construct a confidence interval for the total number of vehicles in the area. The city claims that there are only 11,000 vehicles in the area, so there is no need for a new garage. What do you think?

More on Constructing Hypothesis Tests

Hypotheses always refer to some population or model, not to a particular outcome. As a result, $H_0$ and $H_a$ must be expressed in terms of some population parameter or parameters. $H_a$ typically expresses the effect that we hope to find evidence for, so $H_a$ is usually carefully thought out first; we then set up $H_0$ to be the case when the hoped-for effect is not present. It is not always clear whether $H_a$ should be one-sided or two-sided, i.e., whether the parameter differs from its null hypothesis value in a specified direction. Note: you are not allowed to look at the data first and then frame $H_a$ to fit what the data show.

Potential Abuses of Tests

In many applications a researcher constructs a null hypothesis with the intent of discrediting it. For example:
- $H_0$: the new drug has the same effect as the placebo.
- $H_0$: men and women are paid equally.

A small $p$-value can help a drug company get a drug approved by the FDA. Similarly, a researcher may have an easier time publishing his results if the $p$-value is smaller than 0.05. Because of that, we have to be aware of the following potential abuses:
- Using one-sided tests to make the $p$-value one-half as big.
- Conducting repeated sampling and testing, and reporting only the lowest $p$-value.
- Testing many hypotheses, or testing the same hypothesis on many different subgroups.

In the last two, even if there is actually no effect, you will probably get at least one small $p$-value.
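One way to work the condominium example as a quick sketch (treating the per-unit mean as approximately Normal; the comparison with the city's 11,000 figure is then immediate):

```python
from math import sqrt

N, n = 8000, 100
xbar, sd = 1.6, 0.8           # vehicles per unit in the sample

se_mean = sd / sqrt(n)        # SE of the sample mean
lo, hi = xbar - 1.96 * se_mean, xbar + 1.96 * se_mean

# Scale the per-unit interval up to a total for all 8000 units.
print(N * lo, N * hi)         # roughly (11,546, 14,054) vehicles
```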
Normal Approximation to the Binomial; Central Limit Theorem

Outline:
- Binomial calculations for compound events
- The Normal approximation to the Binomial
- Parameters of the approximating distribution
- Behavior of the approximation as a function of $p$
- Calculations with the Normal approximation
- The continuity correction
- Sampling distributions
- The mean and standard deviation of $\bar X$
- The Central Limit Theorem
- Normal approximation to the Binomial revisited

Let $X = Y_1 + Y_2 + \cdots + Y_n$, with the $Y_i$ as in the shopping setting below.

What is $E(X)$?
$$\mu_X = E(X) = E(Y_1 + Y_2 + \cdots + Y_n) = \sum_{i=1}^n E(Y_i) = np.$$
What is $\mathrm{Var}(X)$?
$$\mathrm{Var}(X) = \mathrm{Var}(Y_1 + \cdots + Y_n) = \sum_{i=1}^n \mathrm{Var}(Y_i) = np(1 - p).$$
What is the distribution of $X$?

Shopping Revisited

Let $p$, where $0 < p < 1$, be the proportion of American adults that find shopping frustrating; $1 - p$ is the proportion that do not. We take a simple random sample of $n$ people from this population: $Y_1, Y_2, \ldots, Y_n$, where $Y_i = 1$ if the $i$th individual in the sample finds shopping frustrating and $Y_i = 0$ otherwise.

What is the expected value of $Y_1$?
$$\mu_Y = E(Y_1) = 0 \cdot (1 - p) + 1 \cdot p = p.$$
What is the expected value of $Y_i$ for $1 \le i \le n$? What is the variance of $Y_1$?
$$\mathrm{Var}(Y_1) = E[(Y_1 - \mu_Y)^2] = (0 - p)^2(1 - p) + (1 - p)^2 p = p(1 - p)\,[p + (1 - p)] = p(1 - p).$$
And $X = \sum_i Y_i \sim B(n, p)$. Recall that the equation for binomial probabilities is
$$P(X = j) = \binom{n}{j} p^j (1 - p)^{n - j}.$$

Binomial Calculations for Compound Events

For a compound event such as $\{X \le k\}$, the probability is given by
$$P(X \le k) = \sum_{j=0}^{k} P(X = j).$$

Shopping Continued

Suppose we draw an SRS of $n = 1500$ American adults and interview each individual. If $p = 0.12$, what is the expected number of people in the sample that find shopping frustrating?
$$E(X) = np = 1500 \times 0.12 = 180.$$
What is the probability that the sample contains 170 or fewer people that find shopping frustrating?
$$P(X \le 170) = \sum_{j=0}^{170} \binom{1500}{j} (0.12)^j (0.88)^{1500 - j}.$$
That's pretty ugly. Is there an easier way? It turns out that as $n$ gets larger, the Binomial distribution looks increasingly like the Normal distribution.

[Figure: Binomial histograms, each representing 10,000 samples from a Binomial distribution, for several values of $n$ and $p$, with Normal curves for comparison.]

Parameters of the Approximating Distribution

The approximating Normal distribution has the same mean and standard deviation as the underlying Binomial distribution. Thus if $X \sim B(n, p)$, having mean $E(X) = np$ and standard deviation $\mathrm{SD}(X) = \sqrt{np(1 - p)}$, it is approximated by a Normal distribution:
$$X \text{ is approximately } N\!\left(\mu_X = np,\ \sigma_X = \sqrt{np(1 - p)}\right).$$
$X$ is the count of successes. What about $\hat p = X/n$, the sample proportion of successes?
$$\hat p \text{ is approximately } N\!\left(\mu_{\hat p} = p,\ \sigma_{\hat p} = \sqrt{\frac{p(1 - p)}{n}}\right).$$

When is the approximation appropriate? The farther $p$ is from $\frac{1}{2}$, the larger $n$ needs to be for the approximation to work. Thus, as a rule of thumb, only use the approximation if
$$np \ge 10 \quad \text{and} \quad n(1 - p) \ge 10.$$
Calculations with the Normal Approximation

Recall the problem we set out to solve: $P(X \le 170)$, where $X \sim B(1500, 0.12)$. How do we calculate this using the Normal approximation? If we were to draw a histogram of the $B(1500, 0.12)$ distribution with bins of width one, $P(X \le 170)$ would be represented by the total area of the bins spanning the values up through 170, i.e., up to the bin edge at 170.5. Thus, using the approximating Normal distribution $Y \sim N(180, 12.59)$, we calculate
$$P(X \le 170) \approx P(Y \le 170.5) = 0.2253.$$
For reference, the exact Binomial probability is 0.2265, so the approximation is apparently pretty good.

The Continuity Correction

The addition of 0.5 in the previous slide is an example of the continuity correction, which is intended to refine the approximation by accounting for the fact that the Binomial distribution is discrete while the Normal distribution is continuous. In general, we make the following adjustments:
$$P(X \le x) \approx P(Y \le x + 0.5)$$
$$P(X < x) = P(X \le x - 1) \approx P(Y \le x - 0.5)$$
$$P(X \ge x) \approx P(Y \ge x - 0.5)$$
$$P(X > x) = P(X \ge x + 1) \approx P(Y \ge x + 0.5)$$

Sampling Distributions

The Normal approximation to the Binomial distribution is in fact a special case of a more general phenomenon, which depends on the notion of a sampling distribution. Consider the following setup: we observe a sample of size $n$ from some population and compute the mean. In Lecture 5 we defined the sampling distribution of a statistic to be the distribution of the statistic in all possible samples of the same size from the same population. If we repeatedly drew samples of size $n$ and calculated $\bar x$, we could ascertain the sampling distribution of $\bar X$.

The Mean and Standard Deviation of $\bar X$

What are the mean and standard deviation of $\bar X$? Let's be more specific about what we mean by a sample of size $n$: we consider the sample to be a collection of $n$ independent and identically distributed (iid) random variables $X_1, X_2, \ldots, X_n$ with common mean $\mu$ and common standard deviation $\sigma$. Thus
$$E(\bar X) = E\!\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n}\sum_{i=1}^n E(X_i) = \frac{1}{n}\, n\mu = \mu,$$
$$\mathrm{Var}(\bar X) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}(X_i) = \frac{1}{n^2}\, n\sigma^2 = \frac{\sigma^2}{n}, \qquad \mathrm{SD}(\bar X) = \frac{\sigma}{\sqrt{n}}.$$

A Word on Notation

The thing to keep in mind is that $\bar x$ is a fixed number, while $\bar X$ is a random variable. The authors of the book are not very careful in making this distinction, and they denote $\bar x$ as the random variable.

The Central Limit Theorem

Now we know that $\bar X$ has mean $\mu$ and standard deviation $\sigma/\sqrt{n}$, but what is its distribution? If $X_1, X_2, \ldots, X_n$ are Normally distributed, then $\bar X$ is also Normally distributed:
$$X_i \sim N(\mu, \sigma) \implies \bar X \sim N\!\left(\mu, \frac{\sigma}{\sqrt{n}}\right).$$
If $X_1, X_2, \ldots, X_n$ are not Normally distributed, then the Central Limit Theorem tells us that $\bar X$ is approximately Normal.

The Central Limit Theorem: Suppose $X_1, X_2, \ldots, X_n$ are iid random variables with mean $\mu$ and finite standard deviation $\sigma$. If $n$ is sufficiently large, the sampling distribution of $\bar X$ is approximately Normal with mean $\mu$ and standard deviation $\sigma/\sqrt{n}$.

Normal Approximation to the Binomial Revisited

What does all this have to do with the Normal approximation to the Binomial? An observation from a Binomial distribution, $Y$, is actually the sum of $n$ independent observations from a simpler distribution: the Bernoulli distribution. A Bernoulli random variable $X$ takes the value 1 with probability $p$ or the value 0 with probability $1 - p$, and has $E(X) = p$ and $\mathrm{SD}(X) = \sqrt{p(1 - p)}$. Letting $X_1, \ldots, X_n$ be $n$ iid Bernoulli random variables,
$$Y = \sum_{i=1}^n X_i = n\bar X.$$
According to the CLT, $\bar X$ has approximately a $N(\mu, \sigma/\sqrt{n})$ distribution, where $\mu = p$ and $\sigma = \sqrt{p(1 - p)}$. It turns out that $n\bar X$ is also approximately Normal, and it has
$$E(n\bar X) = n\mu = np, \qquad \mathrm{Var}(n\bar X) = n^2\,\mathrm{Var}(\bar X) = n^2\,\frac{\sigma^2}{n} = n\sigma^2 = np(1 - p).$$
Thus, in general, if $X_1, \ldots, X_n$ are iid random variables with mean $\mu$ and standard deviation $\sigma$, then
$$\sum_{i=1}^n X_i \text{ is approximately } N\!\left(n\mu,\ \sqrt{n}\,\sigma\right).$$
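Here is a short check of the 0.2253 vs. 0.2265 comparison above (a sketch, assuming scipy is available; only `binom.cdf` and `norm.cdf` are used):

```python
from math import sqrt
from scipy.stats import binom, norm

n, p = 1500, 0.12
mu, sigma = n * p, sqrt(n * p * (1 - p))   # 180 and about 12.59

exact = binom.cdf(170, n, p)               # P(X <= 170), the "ugly" sum
approx = norm.cdf((170.5 - mu) / sigma)    # with the continuity correction

print(round(exact, 4), round(approx, 4))   # about 0.2265 vs 0.2253
```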
Least-Squares Regression; Cautions about Correlation and Regression

Outline:
- Least squares regression: equation of the regression line (slope, intercept); residuals and the residual plot; outliers and influential observations.
- Cautions about correlation and regression.

Least-Squares Regression. Regression describes the relationship between two variables in the situation where one variable can be used to explain or predict the other. The regression line is a straight line that describes how a response variable y changes as an explanatory variable x changes.

Fitting the Regression Line to Data. Since we intend to predict y from x, the errors of interest are mispredictions of y for a fixed x (observed y). The least-squares regression line of y on x is the line that minimizes the sum of squared errors; this is the least squares criterion. Given pairs of observations (x_1, y_1), …, (x_n, y_n), the regression line is given by
$$\hat y = a + bx, \qquad \text{where } b = r\,\frac{s_y}{s_x} \text{ and } a = \bar y - b\,\bar x.$$

Interpreting the Regression Model.
- The response in the model is denoted ŷ to indicate that these are predicted y values, not the true observed y values. The hat denotes prediction.
- The slope of the line indicates how much ŷ changes for a unit change in x.
- The intercept is the value of ŷ for x = 0. It may or may not have a physical interpretation, depending on whether or not x can take values near 0.
- To make a prediction for an unobserved x, just plug it in and calculate ŷ. Note that the line need not pass through the observed data points. In fact, it often will not pass through any of them.

Facts about Least Squares Regression.
- The distinction between explanatory and response variables is essential. Looking at vertical deviations means that changing the axes would change the regression line.
- A change of 1 sd in x corresponds to a change of r sds in y.
- The least squares regression line always passes through the point (x̄, ȳ).
- r², the square of the correlation, is the fraction of the variation in the values of y that is explained by the least squares regression on x. When reporting the results of a linear regression, you should report r².

These properties depend on the least squares fitting criterion, and are one reason why that criterion is used.

Residuals. Residuals are the vertical distances between the data points and the corresponding predicted values:
$$r_i = \text{observed } y - \text{predicted } y = y_i - \hat y_i = y_i - a - b\,x_i.$$
For a least squares regression, the residuals always have mean zero.

Residual Plots. A residual plot is a scatterplot of the residuals against the explanatory variable. It can be used to assess the fit of the regression line. Patterns to look for:
- Curvature indicates that the relationship is not linear.
- Increasing or decreasing spread indicates that the prediction will be less accurate in the range of explanatory variables where the spread is larger.
- Points with large residuals are outliers in the vertical direction.
- Points that are extreme in the x direction are potential high-influence points.

Influential observations are individuals with extreme x values that exert a strong influence on the position of the regression line. Removing them would significantly change the regression line.

A Regression Example. Consider the following data on unemployment rate and unemployment expenditure for several countries. [Table: unemployment rate (%) and unemployment expenditure for roughly two dozen OECD countries, ranging from Switzerland (rate 0.5, expenditure 0.16) up to Spain (rate 15.9, expenditure 2.43), with summary statistics and a scatterplot.]
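The summary-statistics form of the coefficients, b = r·s_y/s_x and a = ȳ − b·x̄, translates directly into code. A sketch with made-up toy values (not the lecture's unemployment table); `statistics.correlation` requires Python 3.10+:

```python
import statistics

def least_squares(xs, ys):
    """Slope and intercept of the least-squares line of y on x,
    via b = r * (s_y / s_x) and a = ybar - b * xbar."""
    xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    r = statistics.correlation(xs, ys)   # Python 3.10+
    b = r * sy / sx
    a = ybar - b * xbar
    return a, b

xs = [1.0, 2.0, 3.0, 4.0, 5.0]           # illustrative data only
ys = [0.3, 0.5, 0.9, 1.0, 1.4]
a, b = least_squares(xs, ys)
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
print(f"yhat = {a:.3f} + {b:.3f} x")
print("mean residual:", sum(residuals) / len(residuals))  # ~0 by construction
```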
Regression Example (cont.). Regression coefficients for the unemployment data (source: OECD), with r ≈ 0.74:
$$b = r\,\frac{s_y}{s_x} \approx 0.168, \qquad a = \bar y - b\,\bar x \approx 0.103.$$
[Figures: the fitted line on the scatterplot of unemployment expenditure against unemployment rate, and the corresponding residual plot against unemployment rate.]

Cautions about Correlation and Regression.
- Correlation and regression describe only linear relationships.
- They are not resistant.
- Extrapolation is the use of a regression line for prediction far outside the range of x values used to obtain the line. Such predictions are not to be trusted.
- Averaging data smoothes out fine-scale variation, leading to higher correlation. This phenomenon is called ecological correlation. Results obtained on averages should not be applied to individuals. Example: in the 1988 CPS, the correlation between income and education for men aged 25–64 was about 0.4. Grouping the data into nine census regions, averaging each variable within each region, and computing the correlation of the nine points yields r ≈ 0.7.

Cautions about Correlation and Regression (cont.). Lurking variables are variables, not among the explanatory or response variables in a study, that may influence the interpretation of relationships among the measured variables. Lurking variables may falsely suggest a relationship when there is none, or may mask a real relationship.

Association is not causation. Two variables may be correlated because both are affected by some other, measured or unmeasured, variable. Example: nations with more TV sets have higher life expectancies. Do TVs cause longer life? What's the real explanation? Even if there is a causal relationship, it only makes sense in one direction. Sometimes the direction is obvious (e.g., if there is a time lag), but not always — consider, for example, the high correlation between self-esteem and success in school or work.

Establishing Causal Relationships. The best way to establish a causal relationship is to conduct an experiment, where the values of one or several variables are manipulated and the effect on some outcome is observed. What if an experiment is not possible? There may be evidence for a causal relationship if:
- The association is strong.
- The association is consistent across multiple studies.
- Higher doses are associated with stronger responses.
- The alleged cause precedes the effect in time.
- The alleged cause is plausible, perhaps because of similar studies, such as on animals.
- What is resistance and its formula?
- What is the use of load resistance?
- What is the effective resistance?
- How do you find internal resistance?
- What is source resistance?
- What is the formula for series resistance?
- What is the difference between resistance and internal resistance?
- How do I measure resistance?
- What is resistance and its unit?
- What is load resistance and internal resistance?
- How do you add resistance?
- What is the example of resistance?
- How do you calculate minimum load resistance?
- What is the resistance of a load?
- What is the difference between resistance and load resistance?
- Why is equivalent resistance less in parallel?
- What causes internal resistance?
- What is mean by internal resistance?
What is resistance and its formula?
Ohm’s law: an empirical relation stating that the current I is proportional to the potential difference V (I ∝ V); it is often written as I = V/R, where R is the resistance. Resistance: the electric property that impedes current; for ohmic materials, it is the ratio of voltage to current, R = V/I. Ohm: the unit of ….
What is the use of load resistance?
The load resistance in a circuit is the effective resistance of all of the circuit elements, excluding the emf source. In energy terms, it can be used to determine the energy delivered to the load by electrical transmission, appearing there as internal energy that raises the temperature of the resistor.
What is the effective resistance?
the resistance to an alternating current, expressed as the ratio of the power dissipated to the square of the effective current.
How do you find internal resistance?
From a graph of terminal p.d. against current: the intercept on the y-axis is equal to the e.m.f. of the cell, and the gradient of the graph is equal to −r, where r is the internal resistance of the cell.
What is source resistance?
This impedance is termed the internal resistance of the source. When the power source delivers current, the measured voltage output is lower than the no-load voltage; the difference is the voltage drop (the product of current and resistance) caused by the internal resistance.
What is the formula for series resistance?
Series resistor equation: R_total = R1 + R2 + R3 + … + Rn. Note that the total or equivalent resistance, R_total, has the same effect on the circuit as the original combination of resistors, as it is the algebraic sum of the individual resistances.
What is the difference between resistance and internal resistance?
Internal resistance is the resistance within a battery, or other voltage source, that causes a drop in the source voltage when there is a current. … External resistance, or simply resistance, generally refers to the opposition to the flow of current offered by any load.
How do I measure resistance?
Set your multimeter to the highest resistance range available. The resistance function is usually denoted by the unit symbol for resistance: the Greek letter omega (Ω), or sometimes by the word “ohms.” Touch the two test probes of your meter together. When you do, the meter should register 0 ohms of resistance.
What is resistance and its unit?
The SI unit of electrical resistance is the ohm (Ω). Related quantities and their SI units: electrical conductivity, siemens per metre (S/m); electrical resistivity, ohm metre (Ω·m).
What is load resistance and internal resistance?
The load resistance in a circuit is the effective resistance of all of the circuit elements excluding the emf source. … The internal resistance of a battery represents the limitation on the efficiency of the chemical reaction that takes place in the battery to supply current to the load.
How do you add resistance?
To calculate the total overall resistance of a number of resistors connected in this way, you add up the individual resistances, using the formula R_total = R1 + R2 + R3, and so on. Example: to calculate the total resistance of three resistors in series, add their three values.
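A minimal sketch of that rule (the resistor values here are illustrative, not from any particular circuit):

```python
def series_resistance(resistances):
    """Total resistance of resistors in series: R_total = R1 + R2 + ... + Rn."""
    return sum(resistances)

print(series_resistance([100, 220, 330]))  # 650 ohms
```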
What is the example of resistance?
Resistance is defined as a refusal to give in, or as something that slows down or prevents something. An example of resistance is a child fighting against her kidnapper. An example of resistance is wind against the wings of a plane. The act or an instance of resisting, or the capacity to resist.
How do you calculate minimum load resistance?
Using Ohm’s Law: V = IR, you can calculate the minimum load if you know the voltage and minimum current ratings. Manipulating this formula yields a resistive load of R = V/I. From here, just plug in the values for V and I, and that is your minimum load resistive value.
What is the resistance of a load?
Load Resistance Defined At the most basic level, load resistance is the cumulative resistance of a circuit, as seen by the voltage, current, or power source driving that circuit. … Everything between the “place where the current goes out” and “the place where the current comes in” contributes to load resistance.
What is the difference between resistance and load resistance?
Resistance is just a proportionality constant (Ohm’s law); it is the electrical inertia of a circuit. The resistance of your circuit is defined by the resistors you have placed and the internal resistances of your components, whereas the load resistance is what draws the power from the circuit.
Why is equivalent resistance less in parallel?
Resistors in parallel: in a parallel circuit, the net resistance decreases as more components are added, because there are more paths for the current to pass through. The two resistors have the same potential difference across them. The currents through them will be different if they have different resistances.
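The reciprocal rule behind this, 1/R_eq = 1/R1 + 1/R2 + … + 1/Rn, makes the "always smaller" behaviour easy to verify; a small sketch with illustrative values:

```python
def parallel_resistance(resistances):
    """Equivalent resistance of resistors in parallel:
    1/R_eq = 1/R1 + 1/R2 + ... + 1/Rn."""
    return 1.0 / sum(1.0 / r for r in resistances)

# The equivalent is always smaller than the smallest branch:
print(parallel_resistance([100.0, 100.0]))          # 50.0 ohms
print(parallel_resistance([100.0, 220.0, 330.0]))   # ~56.9 ohms
```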
What causes internal resistance?
Sulfation and grid corrosion are the main contributors to the rise of the internal resistance with lead acid. Temperature also affects the resistance; heat lowers it and cold raises it. Heating the battery will momentarily lower the internal resistance to provide extra runtime.
What is mean by internal resistance?
the resistance within a battery, or other voltage source, that causes a drop in the source voltage when there is a current. |
Using the numbers 1, 2, 3, 4 and 5 once and only once, and the operations x and ÷ once and only once, what is the smallest whole number you can make?
Number problems at primary level that may require resilience.
Work out Tom's number from the answers he gives his friend. He will only answer 'yes' or 'no'.
Can you work out what a ziffle is on the planet Zargon?
What is the lowest number which always leaves a remainder of 1 when divided by each of the numbers from 2 to 10?
Suppose we allow ourselves to use three numbers less than 10 and multiply them together. How many different products can you find? How do you know you've got them all?
All the girls would like a puzzle each for Christmas and all the boys would like a book each. Solve the riddle to find out how many puzzles and books Santa left.
56 406 is the product of two consecutive numbers. What are these two numbers?
This article for teachers looks at how teachers can use problems from the NRICH site to help them teach division.
What is the sum of all the three digit whole numbers?
There are over sixty different ways of making 24 by adding, subtracting, multiplying and dividing all four numbers 4, 6, 6 and 8 (using each number only once). How many can you find?
The Scot, John Napier, invented these strips about 400 years ago to help calculate multiplication and division. Can you work out how to use Napier's bones to find the answer to these multiplications?
In this game, you can add, subtract, multiply or divide the numbers on the dice. Which will you do so that you get to the end of the number line first?
Given the products of adjacent cells, can you complete this Sudoku?
Ben’s class were cutting up number tracks. First they cut them into twos and added up the numbers on each piece. What patterns could they see?
A game for 2 people using a pack of cards Turn over 2 cards and try to make an odd number or a multiple of 3.
In the multiplication calculation, some of the digits have been replaced by letters and others by asterisks. Can you reconstruct the original multiplication?
A 3 digit number is multiplied by a 2 digit number and the calculation is written out as shown with a digit in place of each of the *'s. Complete the whole multiplication sum.
In November, Liz was interviewed for an article on a parents' website about learning times tables. Read the article here.
Take the number 6 469 693 230 and divide it by the first ten prime numbers and you'll find the most beautiful, most magic of all numbers. What is it?
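If you want to spoil the puzzle for yourself: 6 469 693 230 is exactly the product of the first ten primes, so every division in the sequence is exact, and a few lines of Python carry them all out:

```python
n = 6_469_693_230
for p in [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]:  # the first ten primes
    n //= p                                      # each division is exact
    print(f"after dividing by {p}: {n}")
# The final value is the "most magic" number the puzzle hints at.
```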
This challenge combines addition, multiplication, perseverance and even proof.
This task combines spatial awareness with addition and multiplication.
The clockmaker's wife cut up his birthday cake to look like a clock face. Can you work out who received each piece?
I'm thinking of a number. When my number is divided by 5 the remainder is 4. When my number is divided by 3 the remainder is 2. Can you find my number?
Use the information to work out how many gifts there are in each pile.
When the number x 1 x x x is multiplied by 417 this gives the answer 9 x x x 0 5 7. Find the missing digits, each of which is represented by an "x" .
Use your logical-thinking skills to deduce how much Dan's crisps and ice-cream cost altogether.
When I type a sequence of letters my calculator gives the product of all the numbers in the corresponding memories. What numbers should I store so that when I type 'ONE' it returns 1, and when I type. . . .
Can you arrange 5 different digits (from 0 - 9) in the cross in the way described?
Use this information to work out whether the front or back wheel of this bicycle gets more wear and tear.
Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it?
Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13.
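The identity underneath this puzzle is that a number of the form "abcabc" equals abc × 1001, and 1001 = 7 × 11 × 13. A quick check (the random three-digit blocks are just for illustration):

```python
import random

assert 7 * 11 * 13 == 1001

for _ in range(5):
    abc = random.randint(100, 999)      # a 3-digit block, e.g. 594
    n = abc * 1000 + abc                # repeat it: 594 -> 594594
    assert n == abc * 1001              # the key identity
    assert n % 7 == n % 11 == n % 13 == 0
    print(n, "=", abc, "x 7 x 11 x 13")
```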
Here are the prices for 1st and 2nd class mail within the UK. You have an unlimited number of each of these stamps. Which stamps would you need to post a parcel weighing 825g?
On my calculator I divided one whole number by another whole number and got the answer 3.125. If the numbers are both under 50, what are they?
Look on the back of any modern book and you will find an ISBN code. Take this code and calculate this sum in the way shown. Can you see what the answers always have in common?
What is happening at each box in these machines?
There are four equal weights on one side of the scale and an apple on the other side. What can you say that is true about the apple and the weights from the picture?
This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
Grandma found her pie balanced on the scale with two weights and a quarter of a pie. So how heavy was each pie?
This number has 903 digits. What is the sum of all 903 digits?
If you take a three by three square on a 1-10 addition square and multiply the diagonally opposite numbers together, what is the difference between these products. Why?
Here is a chance to play a version of the classic Countdown Game.
Find the next number in this pattern: 3, 7, 19, 55 ...
What do you notice about the date 03.06.09? Or 08.01.09? This challenge invites you to investigate some interesting dates yourself.
How would you count the number of fingers in these pictures?
Can you find which shapes you need to put into the grid to make the totals at the end of each row and the bottom of each column?
Four Go game for an adult and child. Will you be the first to have four numbers in a row on the number line?
Resources to support understanding of multiplication and division through playing with number.
After training hard, these two children have improved their results. Can you work out the length or height of their first jumps?
Each clue in this Sudoku is the product of the two numbers in adjacent cells. |
Compound Interest Worksheets: calculate the total amount of the investment, or the total paid on a loan, in the following situations. Improve your math knowledge with free questions on compound-interest word problems and thousands of other math skills. Independent Practice 1: this activity is designed to help students practice compound-interest problems.
Note: due to some limitations in web formatting, math symbols and notation may not display properly; in our eBook, all math symbols and notation are in order. Practice simple and compound interest problems, for example:
- What is the rate of interest on that amount?
- What total amount will he get at the end of 3 years?
- Find the sum.
- In what time will it become four times itself?
- Find the difference of their rates.
- In how many years will it amount to 5 times itself at the same rate of S.I.?
- How much will it be after 20 years?
- Compare the C.I., compounded every 6 months, with the S.I.
- Find the simple interest rate at which he should lend the remaining sum of money to the second friend.
- Find the rate of interest.
- How much does he have to pay equally at the end of each year to settle his loan in two years?
- Interest is added to the principal after every 5 years.
- He received the income in the ratio 3 : 4 : 5; find the ratio of the time periods.
- At the end of the year, he got the same interest in all three cases; find the sum invested at each rate of interest.
- Given the S.I. and C.I., find the rate of interest per annum and the sum.
- Find the sum and the time.
- Find the rate percent.
- How much must he pay at the end of the 3rd year to clear all his dues?
Further examples:
- What would be the compound interest accrued on an amount of Rs. …?
- A certain loan amount was repaid in two annual installments of Rs. ….
- If the interest received is Rs. …, find ….
- A person receives a sum of Rs. …; find the amount invested at the beginning. (Solution: use the given compound interest C.I.)
In this worksheet on compound interest, we will solve, by using formulas, different types of questions where compound interest is calculated annually, half-yearly, or quarterly. Examples: she lent it to Andy at the same rate, but compounded annually — find her gain after 2 years; what amount will he get on maturity?; find the compound interest he gets.
- A sum of money is borrowed and paid back in two annual instalments of Rs. …; the sum borrowed was…. (Given: the sum borrowed is the present worth of Rs. ….)
- A person receives a sum of Rs. ….
Interest rates are very powerful and intriguing mathematical concepts. Our banking and finance sector revolves around interest rates, and one minor change in these rates can have a tremendous impact on the economy. But why? Before answering that, we need a definition: interest is the amount charged by the lender to the borrower on the principal loan sum. It is basically the cost of renting money.
Here we give simple and compound interest notes (PDF) for those who are preparing for competitive examinations.
- If the interest received from Scheme B was Rs. …, find the principal.
- What would be the compound interest accrued on the same amount, at the same rate, in the same period?
Compound interest is the addition of interest to the principal sum of a loan or deposit, or in other words, interest on interest. |
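A minimal sketch of the two formulas these worksheets exercise — simple interest A = P(1 + rt) and compound interest A = P(1 + r/n)^(nt) — with illustrative numbers (the principal, rate, and term below are made up, not taken from the problems above):

```python
def simple_interest_amount(principal, rate, years):
    """A = P(1 + r*t), with the rate as a decimal per year."""
    return principal * (1 + rate * years)

def compound_interest_amount(principal, rate, years, periods_per_year=1):
    """A = P(1 + r/n)^(n*t): interest is added to the principal each period."""
    n = periods_per_year
    return principal * (1 + rate / n) ** (n * years)

P, r, t = 10_000.0, 0.08, 3  # illustrative values only
print(simple_interest_amount(P, r, t))        # 12400.0
print(compound_interest_amount(P, r, t))      # ~12597.12 (annual compounding)
print(compound_interest_amount(P, r, t, 2))   # ~12653.19 (half-yearly)
```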
Following certain course descriptions are the designations F (Fall), Sp (Spring), and Su (Summer). These designations indicate the semester(s) in which the course is normally offered and are intended as an aid to students planning their programs of study.
202 Mathematical Concepts for Preschool through Primary Teachers-4 hours. This course includes extensions of the fundamental concepts studied in Math 103 with emphasis on the procedures as they relate to the early elementary student. Topics include processes in advanced counting, the four basic operations, elementary fractions, decimals, probability, statistics, angles and other geometric concepts beyond shapes. The use of manipulatives and technology will support the teaching and learning for this course. Enrollment is open to students in the early elementary program only. Prerequisite: Math 103 (grade of C or better). This course satisfies the A2 category of the University Core Curriculum.
203 Mathematics for Elementary Teachers II-This course is the second in a two-course sequence designed to enhance the conceptual understanding and processes of reasoning: algebraic reasoning, geometry, measurement, data analysis, and probability. The use of manipulatives and technology will support learning and teaching of the topics studied. Enrollment is only open to students seeking a degree in elementary education or a related degree. This course satisfies the A2 category of the University Core Curriculum. Prereq: C or better in Math 103.
213 Algebraic Concepts for Teachers-3 hours. This course is designed to develop conceptual understandings for topics in algebra and number theory found in the middle-grades math curriculum. This course will include the study of sequences, the binomial theorem, fundamental theorem of arithmetic, modular arithmetic, systems of linear equations, matrix arithmetic and algebra, and coding with matrices; the use of manipulatives and technology will support the teaching and learning of these topics. Prerequisite: MATH 115 (grade of C or better) or MATH 118 (grade of C or better).
215 Survey of Calculus-3 hours. An introduction to calculus and its applications in business, economics, and the social sciences. Not applicable to the Mathematics major or minor. This course satisfies the A2 category of the University Core Curriculum. Prerequisite: MATH 111 (grade of C or better). NOTE: A TI-83 or TI-83 Plus graphing calculator is required for this course. F, Sp, Su

Sample MATH 215 Syllabus
230 Calculus I-4 hours. The theory of limits, differentiation, successive differentiation, the definite integral, indefinite integral, and applications of both the derivative and integral. This course satisfies the A2 category of the University Core Curriculum. Prerequisite: MATH 115 (grade of C or better), MATH 118 (grade of C or better), satisfactory placement score, or consent of instructor. NOTE: A TI-83 or TI-83 Plus graphing calculator is recommended for this course. F, Sp, Su
Sample MATH 230 Syllabus
236 Geometry and Measurement for Teachers-3 hours. This course is designed to provide the prospective middle school/junior high school math teacher with conceptual understandings of the geometric concepts found in the middle-grades curriculum. This course will include the study of logic, polygons, solids, Euclid's postulates, congruent figures, similarity, rigid motion and symmetry, vectors and transformations, and other geometries; the use of manipulatives and technology will support the teaching and learning of these topics. Prerequisite: MATH 226 (grade of C or better).

238 Data Analysis and Probability for Teachers-3 hours. This course is designed to develop conceptual understanding for topics in data analysis and probability. Selecting and using appropriate statistical methods to analyze data, developing and evaluating inferences and predictions that are based on data, and applying basic concepts of probability will be covered in this class. The use of manipulatives and technology will support learning and teaching of the topics studied.
241 Principles of Statistics-3 hours. A terminal course for non-mathematics majors and minors. Tabular and graphical representation of statistical data, measures of central tendency and dispersion, probability, sampling, statistical inference, simple correlation, and regression. Prerequisite: Math 106 or MATH 111 or higher. F, Sp
253 Principles of Mathematical Logic-3 hours. Includes introductory topics in mathematical logic, combinatorics, analysis, mathematical proof and problem solving. Prerequisites: Satisfactory placement score or MATH 111. May be taken concurrently with MATH 230-Calculus I. F, Sp
291 Mathematics for Secondary Teachers-3 hours. This course is designed to enhance the conceptual and procedural understandings of the mathematics that is taught at the secondary level: number theory, algebra, geometry, functions, probability, and statistics. Concepts and problems will be viewed from an advanced perspective, where students will investigate alternate definitions and approaches to mathematical ideas; consider proofs, extensions, and generalizations of familiar theorems; investigate multiple approaches to problem solving; and study connections between topics from different courses. Understanding and communication of mathematical concepts and processes will be emphasized; technology and manipulatives will be used when appropriate. This course will not serve as an upper-level mathematics elective for the major or minor in mathematics. Prerequisite: Math 253, grade C or better.
This tricky challenge asks you to find ways of going across rectangles, going through exactly ten squares.
The letters of the word ABACUS have been arranged in the shape of a triangle. How many different ways can you find to read the word ABACUS from this triangular pattern?
Many natural systems appear to be in equilibrium until suddenly a critical point is reached, setting up a mudslide or an avalanche or an earthquake. In this project, students will use a simple. . . .
Whenever a monkey has peaches, he always keeps a fraction of them each day, gives the rest away, and then eats one. How long could he make his peaches last for?
Arrange eight of the numbers between 1 and 9 in the Polo Square below so that each side adds to the same total.
Zumf makes spectacles for the residents of the planet Zargon, who have either 3 eyes or 4 eyes. How many lenses will Zumf need to make all the different orders for 9 families?
Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters.
This task, written for the National Young Mathematicians' Award 2016, invites you to explore the different combinations of scores that you might get on these dart boards.
This challenge, written for the Young Mathematicians' Award, invites you to explore 'centred squares'.
Can you make dice stairs using the rules stated? How do you know you have all the possible stairs?
Can you put plus signs in so this is true? 1 2 3 4 5 6 7 8 9 = 99. How many ways can you do it?
There are nine teddies in Teddy Town - three red, three blue and three yellow. There are also nine houses, three of each colour. Can you put them on the map of Teddy Town according to the rules?
This magic square has operations written in it, to make it into a maze. Start wherever you like, go through every cell and go out a total of 15!
There are 4 jugs which hold 9 litres, 7 litres, 4 litres and 2 litres. Find a way to pour 9 litres of drink from one jug to another until you are left with exactly 3 litres in three of the jugs.
Using the statements, can you work out how many of each type of rabbit there are in these pens?
A Sudoku with clues as ratios or fractions.
A mathematician goes into a supermarket and buys four items. Using a calculator she multiplies the cost instead of adding them. How can her answer be the same as the total at the till?
Find out what a "fault-free" rectangle is and try to make some of your own.
You have two egg timers. One takes 4 minutes exactly to empty and the other takes 7 minutes. What times in whole minutes can you measure and how?
Use two dice to generate two numbers with one decimal place. What happens when you round these numbers to the nearest whole number?
What happens when you round these three-digit numbers to the nearest 100?
What happens when you round these numbers to the nearest whole number?
This challenge focuses on finding the sum and difference of pairs of two-digit numbers.
Find the sum and difference between a pair of two-digit numbers. Now find the sum and difference between the sum and difference! What happens?
Can you find all the ways to get 15 at the top of this triangle of numbers?
This task follows on from Build it Up and takes the ideas into three dimensions!
Sweets are given out to party-goers in a particular way. Investigate the total number of sweets received by people sitting in different positions.
There are 78 prisoners in a square cell block of twelve cells. The clever prison warder arranged them so there were 25 along each wall of the prison block. How did he do it?
Can you put the numbers 1-5 in the V shape so that both 'arms' have the same total?
Only one side of a two-slice toaster is working. What is the quickest way to toast both sides of three slices of bread?
The discs for this game are kept in a flat square box with a square hole for each disc. Use the information to find out how many discs of each colour there are in the box.
What can you say about these shapes? This problem challenges you to create shapes with different areas and perimeters.
How many ways can you find to do up all four buttons on my coat? How about if I had five buttons? Six ...?
Tom and Ben visited Numberland. Use the maps to work out the number of points each of their routes scores.
An investigation that gives you the opportunity to make and justify
There are 44 people coming to a dinner party. There are 15 square tables that seat 4 people. Find a way to seat the 44 people using all 15 tables, with no empty places.
How many different journeys could you make if you were going to visit four stations in this network? How about if there were five stations? Can you predict the number of journeys for seven stations?
Cherri, Saxon, Mel and Paul are friends. They are all different ages. Can you find out the age of each friend using the information?
Winifred Wytsh bought a box each of jelly babies, milk jelly bears, yellow jelly bees and jelly belly beans. In how many different ways could she make a jolly jelly feast with 32 legs?
You have 5 darts and your target score is 44. How many different ways could you score 44?
How many different triangles can you make on a circular pegboard that has nine pegs?
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
In how many ways can you fit two of these yellow triangles together? Can you predict the number of ways two blue triangles can be fitted together?
Jack has nine tiles. He put them together to make a square so that two tiles of the same colour were not beside each other. Can you find another way to do it?
Alice's mum needs to go to each child's house just once and then back home again. How many different routes are there? Use the information to find out how long each road is on the route she takes.
Business math homework help 2018-03-16 16:19:55
HomeworkSpot — a K-12 educational homework portal. This app doesn't just do your homework for you, it shows you how. A community of teachers, mentors and students just like you that can answer any question you might have on a variety of topics. Where to get free business math homework help easily: dealing with business math assignments can be quite difficult.
CPM has created weekly tips for teachers, parents and students, written to help everyone be successful in math. You can solve your algebra problems online step by step — equations, radicals, inequalities, polynomial problems — and plot graphs.
Free tutoring in math, English, science and social studies: grades K3-12, college intro and adult learners. Use the Math Web form, or browse the extensive archive of previous questions and answers.
Our academic experts are all PhD-, Master's- and postgraduate-level tutors in their subjects. Purplemath.
A new CourseMaster outcomes-based learning solution, with homework tools and automatic grading, saves you time while helping students focus on the concepts most important for business math success.
"I ask for the explanation and solution everywhere, even at algebra homework help sites." 100 Best Websites for Free Homework Help (Online College Courses). "If you can't figure out the answer, this app can show it to you step by step — this is precisely best used by people trying to learn math," wrote one Redditor on Tuesday.
Homework Help Probability Statistics math homework high school help CPM Educational Program CPM s mission is to empower mathematics students , teachers through exemplary curriculum, professional development leadership. Get expert advice on reading business homework help, learning activities more.
We make it easy for you to get study help so you can have more free time less stress get better grades. We re the ONLY app that guarantees you Math Chemistry help when you need it 24x7, anytime, Physics anywhere.
non Euclidean Advanced Applied Math, Statistics, Advanced Probability , Discrete Math, Business Math, Number Theory The Math Forum Ask Dr. It is connected with the topics that are mainly connected with science Statistics, engineering , business ASAP Tutor, Homework Help for Accounting Business. We match students who need math homework help in Westchester NY with the tutors who can provide it. 3 8) Showcasing ways to teach kids to solve real world problems using science math technology.
Whether not that s cheating how to stop it is one of the concerns surrounding a new app that can solve math equations with the High School Homework Help College Prep Resources Romulus. Learn from step by step solutions for over 22 000 ISBNs in Math Engineering, Science, Business more Pay for College Homework Help Online.
Whether the textbook is confusing they just need some extra help we have a bunch of resources to help students get the business math homework help they need: pre algebra basic statistics Homework Help. Math Reading Help Math , reading homework tutoring. com Get math homework help studying test prep 24 7.
com: University college homework help answers to. Homework help games more.
No Need To Study s online Math class help specifically our MyMathClass help products cover not just general MyMathLab questions from our MyMathLab help products are used by clients to have their Online Math Homework Help Bruce Grey Catholic District Sc. Of course cheating at math is a terrible way to learn because the whole point isn t to Wolfram. MIT math whiz kid will answer all your MyMathLab statistics pre calculus do your MyMathLab homework for you.
Free math lessons in calculus, algebra, homework help, calculators , formulas, analytic geometry linear algebra Free Math Help Forum Free Math business Help Forum offers free discussion of math problems in any subject. com Math homework assignments are a teacher s way of assessing how much a student has grasped understood about a topic. Hotmath explains math textbook homework problems with step by step math answers for algebra geometry calculus.
The resources include message boards sites with free math videos, tutorial websites, online tutoring sites Contemporary Mathematics for Business Consumers Google Books Result Learn how to use estimate values in this lesson. Learn from step by step solutions for over 22 000 ISBNs in Math Science, Business , Engineering more.
Post your homework now Got It Homework Help on the App Store iTunes Apple Stuck on a homework problem studying for a test need help right away. cises are free response provide guided solutions, sample problems tutorial learning aids for extra help. Many students find it very hard to solve problems in Business Maths such as; inventory financialstatements taxes. More more apps are delivering on demand homework help to students, who can easily re purpose the learning tools to obtain not just assistance but also answers.
Our expert mathematics homework tutors solve all problems related to maths including discrete mathematics homework business math homework help, science, Homework Help for College, engineering mathematics homework, University , School Students Urgenthomework provides instant online , statistics Duterte wants Algebra, professional homework, assignment help for college students in accounting, finance, math, economics , Calculus Trigonometry replaced with. 7 up) A web portal about engineering engineering careers, don t hesitate to read the business following informative tutorial that may come in handy Graduate Mathematics Homework Help Assignments Web We provide high quality , to help young people better understand List Of Suggestions On How To Get Math Homework Help If you want to know where to search for math homework answers, tranparent solutions from basic to complex assignment homework problems.
One can get assistance both in technical engineering, humanitarian disciplines calculus, statistics, chemistry, history, business, economics, algebra others. Find homework help answers easily with both colleagues professionals looking at your questions Business Math Port City International University This edition of BUSINESS MATH USING EXCEL® prepares students to use the latest version of Excel .
Find BUSINESSmath 16269 study guides notes, Homework Help Questions Answers: Math, Science Literature. Instant Finance Expert Homework for Math Physics, Chemistry English. Our Mathematician solves physical mathematics homework additionally business mathematics homework, discrete mathematics homework financial Math Homework Help Answers to Math Problems Hotmath Math homework help. If like most parents, this question fills you with a sense of dreador even panic then this is the book for you.
Below are the important business math topics that are explained in this page with example by using business business math formulas: Selling Price Homework Help App Socratic Launches Math Features. Click on a Specific Subject to Narrow Down Your Search for Math Tutor: Algebra Statistics, Finite Math, Precalculus, Middle School Math, Trigonometry, Basic Math, Geometry, Probability, Differential Equations, Calculus, Linear Algebra, Logic, Discrete Math, Prealgebra, Business Math Online Homework Help Services High School College. Another noted thatif you re a parent helping with homework knowingthe correct answer] how to get there would be sweet Khan Academy. TutorTeddy offers free Business Math homework help HelpHub Online Tutors Online Homework Help BrainMass is an online community of academic subject Experts that provide tutoring across all subjects, College , homework help , to learners of all ages at the University, Solution Library services High School levels.
If your math homework includes equations functions, inequalities, polynomials matrices this is the right trial account Algebra Homework Help Math Help Forum. I was homeschooledthat s not the confession part in 8th grade my algebra textbook had the answers to half the problems in the back.
Flexible scheduling Free math lessons formulas, math tests , calculators homework. Whether it be arithmetic calculus, algebra, anything in between, differential equations Wolfram.
Whether you re looking for a weekly Business Math tutor immediate homework help Chegg Tutors has online tutors who can help you study everything from business calculus to business statistics. 18 PRNewswire - Just six months after launching the Socratic app, founders Chris Pedregal , Shreyans Bhansali announce the next major step in making learning easier more accessible for students on their phones: a student first math experience that breaks down math Mathematics Homework Help Answers Studypool math sin cos.
They crowdsource for mathematics science answers on forums , Facebook groups for matters related to the Primary School Leaving ExaminationPSLE) academic subjects. Sometimes you could miss the topic at school university you simply do not understand the task. If you live in Indiana use this hotline to get free science math help. Filter Homework PeoplePerHour Math can help us to shop wisely buy the right insurance, understand population growth, remodel a home within a budget even bet on the horse with the best chance of winning the race.
Study Skills increase your typing speed, Practice Testscoming soon) Need to improve your research skills learn to become a better writer. org use an interactive chat room to obtain personalized Math Tutorials ThoughtCo If you re struggling with math you don t have to go it alone.
Get reference material on a wide range of topics such as arts law , humanities, business, education, government science Business Math Answers. You are probably getting to the point where the course Math Tutor Lesson Help Tutoring , Teachers Homework.
at the end of this period the loan was extended for 3 years without the interest being paid but the new interest rate was made. Covers arithmetic algebra, geometry, calculus statistics Learnok. Differential Calculus Integral Calculus Calculus for Students in the Social Biological SciencesMath 102) Precalculus MathematicsMath 120) Finite MathematicsMath 151) Statistics for BusinessStat 252) Probability Statistics for EngineersStat 254) Statistics for Life Sciences IStat 255) Statistics for Life Answershark.
ASAP Tutor is homework Help website for those who need help in learning Accounting Financial Accounting, Managerial Accounting, Intermediate Accounting, Corporate Finance, Statistics Business Administration. No matter what course you re taking trigonometry calculus, business math we have an experienced , pre algebra , geometry, algebra knowledgeable tutor who can provide you with the individualized private Oakdale Joint Unified School District: Math Homework Help Engage New YorkENY) Homework provides additional practice for math that is learned in class.
An online homework help website for students parents , ask questions , kids get solutions from a tutor. It will help you with statistics biology , business math, economics, accounting, finance chemistry. My intent when preparing to write this article was to find 10 really good Math homework help forums but after doing a fair amount of research I only found 7.
If you re still unable to cope with your homework take advantage of our business math homework help online service Profit Loss: Essential Business Math Skills. Draw distribution curves on the whiteboard review the slope of Websites for math help, homework help, online tutoring This is an annotated , generic math help, hand picked list of online resources offering math homework help tutoring. We can t do your homework but we ll be happy to lead you in the right direction show you how to solve your problem.
Homework1 offers immediate assistance with business environment homework assignments , projects Math Homework help Assistance from tutors in Westchester NY. Top Math Statistics tutors provide tutoring on Business Math Basic Statistics, Statistical data Analysis Descriptive Statistics for business executives MBA students Business College Homework Help , Online Tutoring Get online tutoring college homework help for Business. Smiling woman baker taking payment from customer Tutoring loss , homework help Don Mills Career College Business mathematics as the term states is related to business which involves mainly profit interest.
Com Kindergarten Program * Phonics Science, Writing, Reading Comprehension, Math Social Studies. Mathematics is an integral part of almost all academic disciplines Finance, Biology, electrical) , Business, Computer Science, Engineeringcivil, even Concepts In Business Mathematics: Economics , Statistics, Economics, Chemistry, mechanical , Accounting, including Physics Finance. Whether you business re learning to count dividing fractions, these sites will make any math problem as easy as 1 3.
Get help with your math concept questions learn how to succeed WebMath Solve Your Math Problem WebMath is designed to help you solve your math problems. Business Math Interest Loan Calculations business math interest loan calculations.
Writing mathematics solutions require quality analysis skills time dedicated. We can even do your Online Algebra Calculus , Statistics Assignments Exams. |
To calculate the refraction, we must know the refractive index in the region through which the rays of light pass. For astronomical refraction, this is the whole atmosphere; so some simplification is helpful. Most calculations assume that the atmosphere is spherical, and that the surfaces of constant density are concentric spheres.
The atmosphere isn't really spherical, not only because the Earth is an oblate spheroid instead of a sphere, but also because the atmosphere is dynamic and contains lateral gradients of temperature and pressure. However, the deviations from sphericity are small, and can be neglected for many purposes.
A spherically-stratified atmosphere greatly simplifies the work, because instead of three spatial coordinates, we need only deal with one (the distance from the center). Furthermore, the spherical model greatly simplifies the mathematical problem, because it has a quantity that remains constant along the entire length of a refracted ray. This refractive invariant is so important that I have devoted a separate page to it.
Here's an outline of what's involved in doing the refraction calculations:
First, we have to specify the atmospheric structure completely. As discussed on another page, this can be done if we specify the surface pressure, and the temperature everywhere as a function of height.
There are physical constraints on thermal profiles: not everything one can imagine is physically possible, or even plausible. These thermal considerations are discussed on another page.
Given the surface pressure and the temperature profile, calculate the density as a function of height. The density is found by combining the perfect gas law with the principle of hydrostatic equilibrium (which is just the assumption that the pressure at every point is due to the weight of the overlying gas).
Of course, the gas law requires a mean molecular weight, which means we have to specify the composition of the gas. But we have to specify this anyway, to calculate the refractive index from the density. The effects of humidity are generally so small that most calculations are done for dry air of standard composition; the carbon dioxide content also has a small effect.
Once the density and composition are specified everywhere, the refractivity (n − 1) can be calculated for a given wavelength of light. The dispersion curve for standard dry air at STP is very well known from extremely accurate laboratory measurements. This is scaled by the density — an assumption that is usually referred to as the “Gladstone-Dale Law”, though it is not in accord with theoretical calculations, and (even more strangely) Gladstone and Dale actually investigated liquids and not gases. However, it appears to be a satisfactory approximation, and is certainly good enough for our work here.
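To make these first steps concrete, here is a minimal sketch — not the author's actual code — of a dry, spherically-stratified model: march upward enforcing hydrostatic equilibrium together with the perfect gas law, then scale a standard-conditions refractivity by density (the "Gladstone-Dale" assumption). The linear-lapse temperature profile, the step size, and the 550 nm refractivity constant are all illustrative assumptions.

```python
import math

G = 9.80665          # gravitational acceleration, m s^-2
R_GAS = 8.31446      # universal gas constant, J mol^-1 K^-1
M_AIR = 0.0289644    # molar mass of dry air, kg mol^-1
N0 = 2.77e-4         # refractivity of standard dry air near 550 nm -- illustrative
RHO0 = 1.2754        # density of dry air at STP, kg m^-3

def temperature(h):
    """Illustrative thermal profile: 288.15 K at the surface, -6.5 K/km lapse."""
    return 288.15 - 0.0065 * h

def density_profile(p_surface=101325.0, h_max=50_000.0, dh=10.0):
    """March upward in steps dh, enforcing hydrostatic equilibrium:
    the pressure at each level is the weight of the overlying gas."""
    heights, densities = [], []
    p, h = p_surface, 0.0
    while h <= h_max:
        rho = p * M_AIR / (R_GAS * temperature(h))  # perfect gas law
        heights.append(h)
        densities.append(rho)
        p -= rho * G * dh                           # dP = -rho * g * dh
        h += dh
    return heights, densities

def refractivity(rho):
    """Gladstone-Dale scaling: (n - 1) is proportional to density."""
    return N0 * rho / RHO0

hs, rhos = density_profile()
print(refractivity(rhos[0]))   # surface refractivity, ~2.7e-4
```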
The refractivity depends on wavelength in a well-known way. This dispersion of the refractivity is the basic cause of green flashes. The refractivity varies by about 1% from red to green light; but even this small difference is enough to produce spectacular results, given the proper atmospheric structure.
Note that it is the refractivity, rather than the refractive index, that is proportional to the density. The refractive index is 1 + refractivity; and, as the latter is usually less than 0.0003, the refractive index varies by about 1 part per million for each Celsius degree of temperature change. (Even so, it turns out that very small thermal structures can have appreciable effects if they produce locally steep temperature gradients; so millidegrees, and even smaller differences, need to be retained in the calculations.)
Given the refractivity profile, we can calculate the refraction at a given apparent altitude in the sky. In principle, this involves solving a differential equation; in practice, it is reduced to evaluating a definite integral numerically. The basic technique is described in a paper by Auer and Standish, which is available in a Web archive. There are a number of pitfalls, involving cancellation of leading significant figures, and the consequent magnification of roundoff errors, that will trap the unwary at this stage. These details are too technical to go into on a Web page, but are critical in obtaining useful results.
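The sketch below is not the Auer–Standish integration; it is a naive shell-by-shell ray trace that applies Snell's law at each spherical shell boundary (using the constancy of n·r·sin z implicitly) and sums the bending. It reproduces the familiar (n − 1)·tan z behaviour at moderate zenith distances, but near the horizon it runs into exactly the cancellation and roundoff pitfalls mentioned above. The shell spacing and the exponential refractivity model are illustrative assumptions.

```python
import math

R_EARTH = 6_371_000.0   # mean Earth radius, m

def total_refraction(apparent_zenith_deg, heights, refractivities):
    """Trace a ray outward through concentric spherical shells and
    accumulate the bending.  heights: shell boundaries (m, increasing);
    refractivities: (n - 1) within each shell."""
    z = math.radians(apparent_zenith_deg)   # local zenith angle at the observer
    bend = 0.0
    for i in range(len(heights) - 1):
        n1 = 1.0 + refractivities[i]
        n2 = 1.0 + refractivities[i + 1]
        r1 = R_EARTH + heights[i]
        r2 = R_EARTH + heights[i + 1]
        # Straight flight to the next shell: r1 sin(z) = r2 sin(z_geo).
        z_geo = math.asin(min(1.0, r1 * math.sin(z) / r2))
        # Snell's law at the boundary: n1 sin(z_geo) = n2 sin(z_new).
        z_new = math.asin(min(1.0, (n1 / n2) * math.sin(z_geo)))
        bend += z_new - z_geo                # deviation at this interface
        z = z_new
    return math.degrees(bend) * 3600.0       # arcseconds

heights = [i * 100.0 for i in range(501)]                   # 0..50 km shells
refrs = [2.7e-4 * math.exp(-h / 8000.0) for h in heights]   # exponential model
print(total_refraction(45.0, heights, refrs))  # ~56", close to (n0 - 1) tan z
```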
For reducing astronomical observations of position, the refraction table is what is needed. However, to understand green flashes and other low-Sun distortions, it is necessary to construct images as well.
These additional steps are needed to interpret sunset phenomena:
Because we can only calculate refraction as a function of apparent altitude, the “true” or geometric altitude that is seen at a given apparent altitude must be found by subtracting the refraction from the apparent altitude. The relation between true and apparent altitude (i.e., the transfer curve) then allows us to determine what parts of the Sun appear in what parts of the sky.
By finding what part of the Sun's disk is seen at a given apparent altitude, we can construct an image of the distorted (and possibly miraged) Sun. The method of doing this is clearly explained by Wegener in his 1918 paper.
The result is, of course, a monochromatic image of the Sun. We need several such images to construct a color picture.
I have described an approximate way to do this on another page.
I have also described how to do this on that other page.
Although the simulations are interesting and useful, they are not accurate in many respects: the colors are only approximate, and many details that influence the actual appearance of the low Sun (such as limb darkening and atmospheric reddening and extinction) have been omitted.
It's useful to incorporate these additional phenomena. They are complicated; however, the correlation of refraction with airmass and reddening (in the form of Laplace's extinction theorem) means that photographs contain additional information. Some initial attempts to produce such realistic images are now appearing among my sunset simulations. Unfortunately, I'm not yet able to animate them.
Copyright © 2002 – 2007, 2012 Andrew T. Young
2.1.1: demonstrate an understanding of the exponent rules of multiplication and division, and apply them to simplify expressions;
2.1.2: manipulate numerical and polynomial expressions, and solve first-degree equations.
2.2.3: derive, through the investigation and examination of patterns, the exponent rules for multiplying and dividing monomials, and apply these rules in expressions involving one and two variables with positive exponents;
2.2.4: extend the multiplication rule to derive and understand the power of a power rule, and apply it to simplify expressions involving one and two variables with positive exponents.
2.3.2: solve problems requiring the manipulation of expressions arising from applications of percent, ratio, rate, and proportion;
2.3.4: add and subtract polynomials with up to two variables [e.g., (2x - 5) + (3x + 1), (3x²y + 2xy²) + (4x²y - 6xy²)], using a variety of tools (e.g., algebra tiles, computer algebra systems, paper and pencil);
2.3.6: expand and simplify polynomial expressions involving one variable [e.g., 2x(4x + 1) - 3x(x + 2)], using a variety of tools (e.g., algebra tiles, computer algebra systems, paper and pencil);
2.3.7: solve first-degree equations, including equations with fractional coefficients, using a variety of tools (e.g., computer algebra systems, paper and pencil) and strategies (e.g., the balance analogy, algebraic strategies);
2.3.9: solve problems that can be modelled with first-degree equations, and compare algebraic methods to other solution methods (Sample problem: Solve the following problem in more than one way: Jonah is involved in a walkathon. His goal is to walk 25 km. He begins at 9:00 a.m. and walks at a steady rate of 4 km/h. How many kilometres does he still have left to walk at 1:15 p.m. if he is to achieve his goal?).
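One illustrative worked solution for the walkathon sample problem above (not part of the curriculum text): algebraically, let d be the kilometres remaining after t hours, so d = 25 - 4t; from 9:00 a.m. to 1:15 p.m. is t = 4.25 h, giving d = 25 - 4(4.25) = 25 - 17 = 8 km. Arithmetically, Jonah walks 4 × 4.25 = 17 km by 1:15 p.m., leaving 25 - 17 = 8 km. Comparing the methods, the algebraic model also answers follow-up questions directly (e.g., d = 0 gives t = 6.25 h, a 3:15 p.m. finish).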
3.1.2: demonstrate an understanding of the characteristics of a linear relation;
3.1.3: connect various representations of a linear relation.
3.2.1: interpret the meanings of points on scatter plots or graphs that represent linear relations, including scatter plots or graphs in more than one quadrant [e.g., on a scatter plot of height versus age, interpret the point (13, 150) as representing a student who is 13 years old and 150 cm tall; identify points on the graph that represent students who are taller and younger than this student] (Sample problem: Given a graph that represents the relationship of the Celsius scale and the Fahrenheit scale, determine the Celsius equivalent of -5°F.);
3.2.4: describe trends and relationships observed in data, make inferences from data, compare the inferences with hypotheses about the data, and explain any differences between the inferences and the hypotheses (e.g., describe the trend observed in the data. Does a relationship seem to exist? Of what sort? Is the outcome consistent with your hypothesis? Identify and explain any outlying pieces of data. Suggest a formula that relates the variables. How might you vary this experiment to examine other relationships?) (Sample problem: Hypothesize the effect of the length of a pendulum on the time required for the pendulum to make five full swings. Use data to make an inference. Compare the inference with the hypothesis. Are there other relationships you might investigate involving pendulums?).
3.3.1: construct tables of values, graphs, and equations, using a variety of tools (e.g., graphing calculators, spreadsheets, graphing software, paper and pencil), to represent linear relations derived from descriptions of realistic situations (Sample problem: Construct a table of values, a graph, and an equation to represent a monthly cellphone plan that costs $25, plus $0.10 per minute of airtime.);
3.3.2: construct tables of values, scatter plots, and lines or curves of best fit as appropriate, using a variety of tools (e.g., spreadsheets, graphing software, graphing calculators, paper and pencil), for linearly related and non-linearly related data collected from a variety of sources (e.g., experiments, electronic secondary sources, patterning with concrete materials) (Sample problem: Collect data, using concrete materials or dynamic geometry software, and construct a table of values, a scatter plot, and a line or curve of best fit to represent the following relationships: the volume and the height for a square-based prism with a fixed base; the volume and the side length of the base for a square-based prism with a fixed height.);
3.3.3: identify, through investigation, some properties of linear relations (i.e., numerically, the first difference is a constant, which represents a constant rate of change; graphically, a straight line represents the relation), and apply these properties to determine whether a relation is linear or non-linear;
3.3.4: compare the properties of direct variation and partial variation in applications, and identify the initial value (e.g., for a relation described in words, or represented as a graph or an equation) (Sample problem: Yoga costs $20 for registration, plus $8 per class. Tai chi costs $12 per class. Which situation represents a direct variation, and which represents a partial variation? For each relation, what is the initial value? Explain your answers.);
3.3.5: determine the equation of a line of best fit for a scatter plot, using an informal process (e.g., using a movable line in dynamic statistical software; using a process of trial and error on a graphing calculator; determining the equation of the line joining two carefully chosen points on the scatter plot).
3.4.1: determine values of a linear relation by using a table of values, by using the equation of the relation, and by interpolating or extrapolating from the graph of the relation (Sample problem: The equation H = 300 - 60t represents the height of a hot air balloon that is initially at 300 m and is descending at a constant rate of 60 m/min. Determine algebraically and graphically how long the balloon will take to reach a height of 160 m. A worked solution appears after this list.);
3.4.3: determine other representations of a linear relation, given one representation (e.g., given a numeric model, determine a graphical model and an algebraic model; given a graph, determine some points on the graph and determine an algebraic model);
3.4.4: describe the effects on a linear graph and make the corresponding changes to the linear equation when the conditions of the situation they represent are varied (e.g., given a partial variation graph and an equation representing the cost of producing a yearbook, describe how the graph changes if the cost per book is altered, describe how the graph changes if the fixed costs are altered, and make the corresponding changes to the equation).
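One illustrative worked solution for the hot air balloon sample problem in 3.4.1 above (not part of the curriculum text): algebraically, 160 = 300 - 60t gives 60t = 140, so t = 140/60 ≈ 2.33 min (about 2 min 20 s); graphically, the line starts at (0, 300) with slope -60, and reading across from H = 160 to the line and down to the t-axis gives the same t ≈ 2.3 min.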
4.1.1: determine the relationship between the form of an equation and the shape of its graph with respect to linearity and non-linearity;
4.1.2: determine, through investigation, the properties of the slope and y-intercept of a linear relation;
4.1.3: solve problems involving linear relations.
4.2.1: determine, through investigation, the characteristics that distinguish the equation of a straight line from the equations of nonlinear relations (e.g., use a graphing calculator or graphing software to graph a variety of linear and non-linear relations from their equations; classify the relations according to the shapes of their graphs; connect an equation of degree one to a linear relation);
4.2.2: identify, through investigation, the equation of a line in any of the forms y = mx + b, Ax + By + C = 0, x = a, y = b;
4.2.3: express the equation of a line in the form y = mx + b, given the form Ax + By + C = 0.
4.3.1: determine, through investigation, various formulas for the slope of a line segment or a line (e.g., m = rise/run, m = the change in y/the change in x or m = delta y/delta x, m = (y2 - y1)/(x2 - x1)), and use the formulas to determine the slope of a line segment or a line;
4.3.2: identify, through investigation with technology, the geometric significance of m and b in the equation y = mx + b;
4.3.3: determine, through investigation, connections among the representations of a constant rate of change of a linear relation (e.g., the cost of producing a book of photographs is $50, plus $5 per book, so an equation is C = 50 + 5p; a table of values provides the first difference of 5; the rate of change has a value of 5, which is also the slope of the corresponding line; and 5 is the coefficient of the independent variable, p, in this equation);
4.3.4: identify, through investigation, properties of the slopes of lines and line segments (e.g., direction, positive or negative rate of change, steepness, parallelism, perpendicularity), using graphing technology to facilitate investigations, where appropriate.
4.4.1: graph lines by hand, using a variety of techniques (e.g., graph y = (2/3)x - 4 using the y-intercept and slope; graph 2x + 3y = 6 using the x- and y-intercepts);
4.4.2: determine the equation of a line from information about the line (e.g., the slope and y-intercept; the slope and a point; two points) (Sample problem: Compare the equations of the lines parallel to and perpendicular to y = 2x - 4, and with the same x-intercept as 3x - 4y = 12. Verify using dynamic geometry software.);
4.4.3: describe the meaning of the slope and y-intercept for a linear relation arising from a realistic situation (e.g., the cost to rent the community gym is $40 per evening, plus $2 per person for equipment rental; the vertical intercept, 40, represents the $40 cost of renting the gym; the value of the rate of change, 2, represents the $2 cost per person), and describe a situation that could be modelled by a given linear equation (e.g., the linear equation M = 50 + 6d could model the mass of a shipping package, including 50 g for the packaging material, plus 6 g per flyer added to the package);
4.4.4: identify and explain any restrictions on the variables in a linear relation arising from a realistic situation (e.g., in the relation C = 50 + 25n,C is the cost of holding a party in a hall and n is the number of guests; n is restricted to whole numbers of 100 or less, because of the size of the hall, and C is consequently restricted to $50 to $2550);
4.4.5: determine graphically the point of intersection of two linear relations, and interpret the intersection point in the context of an application (Sample problem: A video rental company has two monthly plans. Plan A charges a flat fee of $30 for unlimited rentals; Plan B charges $9, plus $3 per video. Use a graphical model to determine the conditions under which you should choose Plan A or Plan B.).
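One illustrative worked solution for the video rental sample problem above (not part of the curriculum text): Plan A is C = 30; Plan B is C = 9 + 3v for v videos per month. The graphs intersect where 30 = 9 + 3v, so v = 7 and C = $30. For fewer than 7 videos per month Plan B is cheaper, for more than 7 Plan A is cheaper, and at exactly 7 the two plans cost the same.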
5.1.2: solve problems involving the measurements of two-dimensional shapes and the surface areas and volumes of three-dimensional figures;
5.1.3: verify, through investigation facilitated by dynamic geometry software, geometric properties and relationships involving two-dimensional shapes, and apply the results to solving problems.
5.2.1: determine the maximum area of a rectangle with a given perimeter by constructing a variety of rectangles, using a variety of tools (e.g., geoboards, graph paper, toothpicks, a pre-made dynamic geometry sketch), and by examining various values of the area as the side lengths change and the perimeter remains constant;
5.2.2: determine the minimum perimeter of a rectangle with a given area by constructing a variety of rectangles, using a variety of tools (e.g., geoboards, graph paper, a premade dynamic geometry sketch), and by examining various values of the side lengths and the perimeter as the area stays constant;
5.2.3: identify, through investigation with a variety of tools (e.g., concrete materials, computer software), the effect of varying the dimensions on the surface area [or volume] of square-based prisms and cylinders, given a fixed volume [or surface area];
5.2.4: explain the significance of optimal area, surface area, or volume in various applications (e.g., the minimum amount of packaging material; the relationship between surface area and heat loss);
5.3.1: relate the geometric representation of the Pythagorean theorem and the algebraic representation a² + b² = c²;
5.3.2: solve problems using the Pythagorean theorem, as required in applications (e.g., calculate the height of a cone, given the radius and the slant height, in order to determine the volume of the cone);
5.3.3: solve problems involving the areas and perimeters of composite two-dimensional shapes (i.e., combinations of rectangles, triangles, parallelograms, trapezoids, and circles) (Sample problem: A new park is in the shape of an isosceles trapezoid with a square attached to the shortest side. The side lengths of the trapezoidal section are 200 m, 500 m, 500 m, and 800 m, and the side length of the square section is 200 m. If the park is to be fully fenced and sodded, how much fencing and sod are required?);
5.3.4: develop, through investigation (e.g., using concrete materials), the formulas for the volume of a pyramid, a cone, and a sphere (e.g., use three-dimensional figures to show that the volume of a pyramid [or cone] is one third the volume of a prism [or cylinder] with the same base and height, and therefore that V(pyramid) = V(prism)/3 or V(pyramid) = (area of base)(height)/3);
5.3.5: determine, through investigation, the relationship for calculating the surface area of a pyramid (e.g., use the net of a square-based pyramid to determine that the surface area is the area of the square base plus the areas of the four congruent triangles);
5.3.6: solve problems involving the surface areas and volumes of prisms, pyramids, cylinders, cones, and spheres, including composite figures (Sample problem: Break-bit Cereal is sold in a single-serving size, in a box in the shape of a rectangular prism of dimensions 5 cm by 4 cm by 10 cm. The manufacturer also sells the cereal in a larger size, in a box with dimensions double those of the smaller box. Compare the surface areas and the volumes of the two boxes, and explain the implications of your answers.).
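One illustrative worked comparison for the cereal box sample problem above (not part of the curriculum text): the small box (5 cm × 4 cm × 10 cm) has surface area 2(5 × 4 + 5 × 10 + 4 × 10) = 220 cm² and volume 200 cm³; doubling every dimension (to 10 cm × 8 cm × 20 cm) multiplies surface area by 2² = 4 (880 cm²) and volume by 2³ = 8 (1600 cm³), so the larger box holds 8 times the cereal with only 4 times the packaging.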
5.4.1: determine, through investigation using a variety of tools (e.g., dynamic geometry software, concrete materials), and describe the properties and relationships of the interior and exterior angles of triangles, quadrilaterals, and other polygons, and apply the results to problems involving the angles of polygons (Sample problem: With the assistance of dynamic geometry software, determine the relationship between the sum of the interior angles of a polygon and the number of sides. Use your conclusion to determine the sum of the interior angles of a 20-sided polygon.);
5.4.2: determine, through investigation using a variety of tools (e.g., dynamic geometry software, paper folding), and describe some properties of polygons (e.g., the figure that results from joining the midpoints of the sides of a quadrilateral is a parallelogram; the diagonals of a rectangle bisect each other; the line segment joining the midpoints of two sides of a triangle is half the length of the third side), and apply the results in problem solving (e.g., given the width of the base of an A-frame tree house, determine the length of a horizontal support beam that is attached half way up the sloping sides);
Correlation last revised: 8/18/2015
Nucleon Mass Corrections to the $n \leftrightarrow p$ Rates During Big Bang Nucleosynthesis
Bartol Research Institute
University of Delaware, Newark, DE 19716
The thermal rates for converting neutrons to protons, and vice versa, are calculated, including corrections of first order in a low energy scale divided by the nucleon mass. The results imply that the primordial helium abundance predicted for big bang nucleosynthesis has been systematically underestimated by about 0.5%, i.e., $\Delta Y \approx +0.0012$.
The purpose of this paper is to evaluate nucleon mass corrections to the rate of weak transitions that interconvert neutrons and protons during the early stages of big bang nucleosynthesis[2, 3]. In the usual calculation of these rates the nucleon mass is ignored; i.e., one includes all energies and momenta in the MeV range, specifically, ratios of the electron mass, $m_e$, the temperature, $T$, and the neutron-proton mass difference, $\Delta$, to one another; but factors such as $\epsilon/M$, i.e., a low energy scale $\epsilon \in \{m_e, T, \Delta\}$ divided by a nucleon mass $M$, are ignored. These factors are individually of order a tenth of a percent, but it will be shown that together they cause roughly a 0.5% increase in the helium abundance predicted by big bang nucleosynthesis calculations. Such a systematic correction is significant in that it is comparable to the largest uncertainty in the standard hot big bang calculation - that due to uncertainty in the neutron half life. Further, the increase in the predicted helium abundance translates into a tighter constraint on the density of baryons as well as a strengthening of particle constraints based on big bang nucleosynthesis - such as the limit on the number of neutrino species that may be in thermal equilibrium in the early Universe.
As an example of the sort of effect that is usually ignored, consider the neutron to proton abundance ratio in thermal equilibrium. Including the first correction in an expansion in inverse powers of the nucleon mass, this ratio is
$$\left(\frac{n_n}{n_p}\right)_{eq} = \left(\frac{M_n}{M_p}\right)^{3/2} e^{-\Delta/T} \approx e^{-\Delta/T}\left(1 + \frac{3\Delta}{2M}\right). \qquad (1)$$
Usually one includes just the 'Boltzmann factor' $e^{-\Delta/T}$ and ignores the correction, which is small, $\approx 0.2\%$. Thus, even if freeze out of the weak reactions occurred at the same time, one might expect the neutron abundance to be slightly higher if nucleon mass corrections were included.
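A minimal numerical check of this 0.2% shift, using standard nucleon masses and a representative freezeout-era temperature (both chosen here for illustration, not taken from this paper):

```python
# Sketch: equilibrium n/p ratio with and without the (Mn/Mp)^(3/2) factor.
# Masses in MeV (standard values); T = 0.7 MeV is a representative
# freezeout-era temperature chosen for illustration.
import math

MN, MP = 939.565, 938.272      # neutron, proton mass (MeV)
DELTA = MN - MP                # about 1.293 MeV
T = 0.7                        # MeV, assumed

boltzmann = math.exp(-DELTA / T)             # zeroth order ratio
corrected = (MN / MP) ** 1.5 * boltzmann     # with the nucleon mass factor

# fractional shift ~ 3*DELTA/(2*M) ~ 0.002, the 0.2% quoted in the text
print(boltzmann, corrected, corrected / boltzmann - 1.0)
```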
Of course, it is essential that the neutron fraction drops out of thermal equilibrium as the weak reactions become slow compared to the expansion rate of the Universe, and so one does not calculate the neutron abundance by equilibrium arguments in a numerical calculation. Instead one evaluates the rates for $n \leftrightarrow p$ conversions and tracks carefully the maintenance of approximate equilibrium at high temperatures and the failure to maintain equilibrium at low temperatures. It is, therefore, necessary to evaluate the change in the rates - not just the equilibrium neutron fraction. There are many corrections to the rates and it is the purpose of this paper to enumerate and evaluate them in a systematic fashion.
The spirit of this paper is similar to those which evaluated the electromagnetic radiative, thermal, and coulomb corrections to the weak processes[6, 7]. In both cases, the corrections are a few percent at most. To achieve a satisfactory level of accuracy, one part in a thousand, it is necessary to evaluate only the first correction, but not effects of order $(\epsilon/M)^2$ or, in the electromagnetic case, of order $\alpha\,\epsilon/M$. Nor is it necessary to consider terms of order $\alpha^2$.
Another similarity in the two problems concerns the normalization of the corrections. The nucleosynthesis numerical codes typically normalize the weak rates to the experimental value of the neutron mean lifetime, $\tau_n$. Thus, when evaluating a purported correction to the rates one must also evaluate the same sort of corrections for neutron decay, and adjust the corrections appropriate for BBN accordingly. So, for example, the largest term in the order $\alpha$ radiative correction to the weak rates is a constant which also shows up in neutron decay. Similarly, a good part of the coulomb correction to the weak rates also cancels. Thus, the early numerical code of Wagoner[8, 9] contained a simple coulomb correction and no radiative correction, but although individual reactions have corrections of a few percent, the net effect of a more detailed treatment results in less than a 1% correction to Wagoner's results. In contrast, the $\epsilon/M$ corrections are of order 1% to the reaction rates, but the comparable correction to neutron decay is smaller due to kinematic thresholds. As a result, nearly the whole of the effects discussed here survive to affect the helium abundance.
With these thoughts in mind the rest of the paper is ordered as follows. In section 2, the main results are presented - the corrections to the scattering rates to first order in $\epsilon/M$. In section 3, similar effects are considered for neutron decay. Section 4 combines the results from the previous two sections to arrive at an expected change in the helium abundance. Section 5 contains a discussion of the significance of the results.
First, however, it may be useful to the reader to clarify some of the notation used later. Except where the neutron or proton mass is explicitly indicated by $M_n$ or $M_p$, the nucleon mass is given as $M$. In the formulae for cross-sections, rates, etc., $M$ refers to the initial nucleon mass, but to the extent that the formulae are only accurate to first order in $\epsilon/M$ it makes no difference which nucleon mass is actually used. Also, $E$ and $p$ are the energy and momentum of the initial lepton in the rest frame of the fluid. Unless specifically noted, the energy $E'$ denotes the quantity $E + \delta$, where $\delta = \pm\Delta$. This is only equal to the outgoing lepton energy in the infinite mass limit, $M \to \infty$. $p'$ is the corresponding momentum, $p' = \sqrt{E'^2 - m'^2}$, with $m'$ the mass of the outgoing lepton. During nucleosynthesis the temperature describing the neutrino distribution, $T_\nu$, is not equal to the temperature of the rest of the plasma (including the nucleons), denoted by $T_\gamma$. When the temperature $T$ is used, it refers to the temperature which describes the outgoing lepton.
2 Corrections to Scattering Processes
There are six processes that contribute to $n \leftrightarrow p$ conversion in the early Universe; neutron decay $n \to p\,e^-\bar\nu_e$, inverse neutron decay $p\,e^-\bar\nu_e \to n$, and four scattering processes, $\nu_e n \leftrightarrow e^- p$ and $e^+ n \leftrightarrow \bar\nu_e p$. The most critical time is when these reactions are 'freezing out', i.e., when they are just failing to maintain thermal equilibrium. This occurs at a temperature of order 1 MeV. At that time the scattering processes dominate over neutron decay and inverse decay by a factor of about 1000. The reactions which convert neutrons to protons are some 6 times greater than the inverse reactions, due to the nuclear mass difference affecting phase space. The reactions involving antileptons are nearly equal in importance to those involving leptons. To achieve an accuracy of 0.1% it therefore seems sufficient to consider just corrections to the scattering rates; however, because of the role played by neutron decay in normalizing the weak rates those corrections must also be examined. Accordingly, this section presents corrections to scattering and the next examines neutron decay.
The rate for two body scattering reactions in a medium may be written in the form
$$\Gamma = \int \prod_{i=1}^{4} \frac{d^3 p_i}{(2\pi)^3\, 2E_i}\; (2\pi)^4\, \delta^4(p_1 + p_2 - p_3 - p_4)\; |M|^2\, f_1 f_2\, (1 - f_3)(1 - f_4), \qquad (2)$$
where $p_i$ is the four momentum, $\mathbf{p}_i$ is the three momentum, $E_i$ is the energy of each particle, and for the problem at hand all particles obey Fermi statistics. The occupation numbers $f_i$ take thermal equilibrium values only for those species actually in equilibrium. In this notation the squared matrix element, $|M|^2$, has been summed over all spin degrees of freedom and it is assumed that the $f_i$ do not depend on spin. The presence or absence of right-handed neutrinos is irrelevant for the evaluation of the scattering rates. For the reactions of interest, let particles 2 and 4 be the in and out nucleons, respectively, and let particles 1 and 3 be the leptons or anti-leptons.
2.1 The infinite mass limit
Before going into the details of nucleon mass corrections it is appropriate to evaluate the reaction rates in the limit of infinite nucleon mass. In this limit the energies of the in and out leptons are related by $E' = E + \delta$, where $\delta = \pm\Delta$ depending on whether a neutron or proton is the initial nucleon. Further, neither lepton occupation number will depend upon the scattering angles. The rate can then be rewritten in the familiar form
$$\Gamma = n_N \int \frac{d^3 p_1}{(2\pi)^3}\, f_1(E)\; \sigma v\; \bigl[1 - f_3(E')\bigr], \qquad (3)$$
where $\sigma$ is the cross-section for the reaction summed over both initial and final spins and $v$ is the relative velocity of the two initial particles, which for infinite mass nucleons may be taken to be just the initial lepton velocity, $v = p/E$. It is useful to concentrate on the rate per initial state in the absence of blocking, $\sigma v$. For infinite mass nucleons, this quantity becomes
$$\sigma v = \frac{G_F^2 \cos^2\theta_C\, (1 + 3 g_A^2)}{\pi}\; E' p' \equiv \mathcal{G}^2\, E' p', \qquad (4)$$
where $\sigma$ is the cross-section for a lepton of energy $E$ incident on an infinitely heavy nucleon, $G_F$ is Fermi's constant, $\theta_C$ is the Cabibbo angle, with $\cos\theta_C \approx 0.975$, and $g_A \approx 1.26$ is the ratio of the axial vector to vector coupling of the nucleon for charged currents. The last relation in Eq. 4 serves as a definition of the constant $\mathcal{G}^2$, a factor which will be common to all the weak reactions. The coulomb and radiative corrections to the rates are embodied in an electromagnetic correction factor; however, as explained in the introduction, all corrections to the weak rates are small and may be treated independently. It is therefore acceptable to ignore this factor except when worrying about the overall normalization of the rates, and so it will be dropped.
For heavy nucleons, low baryon density and low lepton asymmetry, it is appropriate to approximate the nucleon distribution $f_2$ by a Boltzmann distribution and ignore the nucleon blocking factor $(1-f_4)$ entirely. Integrating over nucleon momentum and lepton direction, converting the lepton momentum integral to one over energy, and using thermal distributions for the leptons one gets the rate,
$$\Gamma = n_N \int_0^\infty dE\; \frac{d\lambda}{dE}, \qquad \frac{d\lambda}{dE} = \frac{\mathcal{G}^2}{2\pi^2}\; p\,E\; E'p'\; f_1(E)\,\bigl[1 - f_3(E')\bigr], \qquad (5)$$
where $n_N$ is the spatial density of initial nucleons, $f_i(E) = [\exp(E/T_i) + 1]^{-1}$, and $T_i$ is the temperature describing lepton $i$.
Eq. 5 defines the differential interaction rate $d\lambda/dE$, which is plotted in Fig. 1 for each of the four scattering processes. The plots were generated using temperatures near the conventional "freezeout temperature", i.e., that temperature where the equilibrium abundances are equal to the final values, as if the interactions were very rapid and then turned off abruptly. The freezeout point is high enough that electron annihilation has caused the photon temperature to increase by only a small amount over the neutrino temperature. Note that the $n \to p$ rates in Fig. 1 are some 6 times greater than the $p \to n$ rates, as required to maintain equilibrium at this temperature.
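To make the structure of Eq. 5 concrete, here is a minimal sketch evaluating the zeroth-order rate per neutron for $\nu_e n \to e^- p$ by simple quadrature. The coupling values are standard ones and the normalization is not tied to the neutron lifetime, so, as the text stresses, the absolute number is not this paper's; the point is only the shape of the integrand $pE\,E'p'\,f_1(1-f_3)$.

```python
# Sketch: zeroth-order rate per neutron for nu_e + n -> e- + p, integrating
#   d(lambda)/dE = (G^2 / 2 pi^2) p E E' p' f_nu(E) [1 - f_e(E')]
# over incident neutrino energy E (natural units, MeV). Couplings are
# standard values; a production calculation would tie the normalization
# to the measured neutron lifetime as described in the text.
import math

GF = 1.16637e-11     # Fermi constant, MeV^-2
COSC = 0.975         # cos(theta_C), assumed value
GA = 1.26            # axial/vector ratio, assumed value
ME = 0.511           # electron mass, MeV
DELTA = 1.293        # neutron-proton mass difference, MeV
G2 = GF ** 2 * COSC ** 2 * (1.0 + 3.0 * GA ** 2) / math.pi

def f(E, T):                   # Fermi-Dirac occupation number
    return 1.0 / (math.exp(E / T) + 1.0)

def rate_nu_n(T_nu, T_e, steps=4000, emax_factor=30.0):
    """Rate per neutron in MeV (hbar = c = 1)."""
    h = emax_factor * T_nu / steps
    total = 0.0
    for i in range(steps):             # midpoint rule over E
        E = (i + 0.5) * h              # neutrino: p = E
        Ep = E + DELTA                 # outgoing electron energy
        pp = math.sqrt(Ep * Ep - ME * ME)
        total += (G2 / (2.0 * math.pi ** 2) * E * E * Ep * pp
                  * f(E, T_nu) * (1.0 - f(Ep, T_e)) * h)
    return total

print(rate_nu_n(T_nu=0.7, T_e=0.7), "MeV")
```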
To give a better feel for the important points in determining the neutron fraction, Fig. 2 shows the integrated scattering rates, $\lambda$, for the four scattering processes as a function of the photon temperature, $T_\gamma$. The expansion rate, $H$, is also shown, along with the free neutron decay rate. The $p \to n$ reactions freeze out first, and become increasingly unimportant at lower temperatures. The $n \to p$ scattering rates freeze out later. They are more important than free neutron decay down to fairly low temperatures, but what really counts is the comparison to $H$. After freezeout the most significant comparison to $H$ is free neutron decay, just at the time the "deuterium bottleneck" breaks. Corrections to the scattering rates at those late times are not very important.
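The freezeout comparison can be sketched with a well-known closed-form stand-in for the integrated $n \to p$ rate (the approximation of Bernstein, Brown and Feinberg), which is not the rate computed in this paper but reproduces its gross behavior; $\tau_n$ and $g_*$ are standard assumed values.

```python
# Sketch: locate weak freezeout by comparing an analytic stand-in for the
# total n -> p rate (the Bernstein-Brown-Feinberg approximation, not this
# paper's rates) with the expansion rate H(T). tau_n and g_* are standard
# assumed values.
import math

TAU_N = 886.7       # neutron lifetime, s (assumed)
MPL = 1.221e22      # Planck mass, MeV
GSTAR = 10.75       # relativistic degrees of freedom before e+e- annihilation
DELTA = 1.293       # MeV
HBAR = 6.582e-22    # MeV s

def lam_np(T):                       # total n -> p rate, 1/s
    x = DELTA / T
    return 255.0 / (TAU_N * x ** 5) * (12.0 + 6.0 * x + x * x)

def hubble(T):                       # expansion rate, 1/s (T in MeV)
    return 1.66 * math.sqrt(GSTAR) * T * T / MPL / HBAR

T = 3.0
while lam_np(T) > hubble(T):         # scan down in T until the rate < H
    T -= 0.001
print("freezeout near T =", round(T, 2), "MeV")   # about 0.7 MeV here
```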
Apart from electromagnetic corrections, Eq. 5 is the reaction rate used in nucleosynthesis calculations. There are several points where infinite mass nucleons were used. Merely writing the reaction rate in the form of a cross-section required that the final state occupation numbers did not depend on the scattering angles, and this depends on the assumption that no recoil energy goes to the nucleon. The vector and axial vector cross-section, Eq. 4, has corrections of order $\epsilon/M$. In addition, the vector and axial vector Lagrangian must be corrected for nucleon structure effects, such as momentum dependent form factors, or new terms in the effective low energy Lagrangian, such as 'weak magnetism'. The relative velocity, $v$, must be corrected for the nucleon velocity. One must average over the Boltzmann distribution of nucleon momenta to produce a 'thermal averaged' cross-section times relative velocity, and similarly for the blocking effects due to the Fermi statistics. Since the nucleon velocity is of order $v_N \sim \sqrt{T/M}$ one must expand in the nuclear velocity to second order to get corrections to first order in $1/M$. One implication of this is that there may be correlations between corrections that are first order in $v_N$. Although first order terms vanish when angle averaged their correlations may not, and can therefore contribute at first order in $1/M$.
As presented here, these calculations are done by evaluating corrections to the rates, $\sigma v$, and to the blocking factors, $(1-f_3)$, as a function of the initial lepton energy. After taking appropriate angular combinations, the corrections are integrated over lepton energy to produce corrections to the conversion rates per nucleon, which may be used in the rate equations to solve for the neutron abundance as a function of time.
The most obvious corrections to consider are corrections to the cross-section, which are combined with the zeroth order, or infinite nucleon mass, values for $\sigma v$ and $(1-f_3)$. The corrections to the cross-section will be calculated in section 2.2. When the nucleon velocity is taken into account there will be corrections to $\sigma v$ due to the altered lepton energy, as well as corrections intrinsic to the blocking factors. These are evaluated in section 2.3. These corrections and their correlations will be expressed as effective corrections to the rate, which can be multiplied by the zeroth order $\sigma v$ and $(1-f_3)$.
As a preliminary to this, consider the differential cross-section to order $1/M$,
$$\frac{d\sigma}{d\cos\theta^*} = \frac{\sigma_0}{2}\Bigl[\,1 + \delta_\sigma + a\,\cos\theta^*\,\Bigr], \qquad (6)$$
The zeroth order total cross-section, $\sigma_0$, is given by Eq. 4. The correction to the total cross-section is $\delta_\sigma$. The relevant angular dependence of the differential cross-section is given by the coefficient $a$, which is to be multiplied by $\cos\theta^*$ with $\theta^*$ being the center of mass scattering angle. Terms higher order in $\cos\theta^*$ are suppressed by two powers of $v_N$ (or powers of $\epsilon/M$) and may be dropped. If there were no corrections to the blocking factors for the final state leptons the $\cos\theta^*$ term would integrate to zero when averaged over scattering angle; however, there are corrections to the blocking factors. Since these are suppressed by factors of $v_N$ one need only keep the zeroth order term of $a$,
$$a_0 = \frac{1 - g_A^2}{1 + 3 g_A^2}. \qquad (7)$$
Discussion of the corrections to the rates due to the $\cos\theta^*$ term is postponed till later, after evaluating the corrections to the lepton blocking factors in section 2.3.
2.2 Corrections to the cross-section
There are two important corrections in $\delta_\sigma$, one that arises from including the weak magnetism term in the interaction, and one that arises from modifications to the final state phase space due to the recoil of the nucleon. They may be treated independently to first order in $1/M$.
The effective low energy weak Lagrangian is,
$$\mathcal{L} = \frac{G_F \cos\theta_C}{\sqrt{2}}\; l^\mu J_\mu + \mathrm{h.c.}, \qquad (8)$$
where the leptonic current $l^\mu$ has the usual $V\!-\!A$ structure and the hadronic weak current is given by
$$J^\mu = \bar\psi_p \Bigl[\, \gamma^\mu - g_A\,\gamma^\mu\gamma_5 + \frac{f_2}{2M}\,\sigma^{\mu\nu} q_\nu + \frac{g_P}{M}\, q^\mu \gamma_5 \,\Bigr]\,\psi_n, \qquad (9)$$
where $f_2 \approx 3.7$ is the anomalous weak charged current magnetic moment of the nucleon, $g_P$ is the pseudoscalar coupling to the nucleon, and $q$ is the momentum transfer to the nucleon. At higher energies, one would treat the couplings $g_V = 1$, $g_A$, $f_2$, and $g_P$ as form factors with corrections of order $q^2/m_i^2$, where the $m_i$ differ for the different interactions and are experimentally determined to be in the range of roughly 1 GeV. Thus, the form factor corrections may reasonably be assumed to be higher order than the corrections considered in this paper.
The full squared matrix element for scattering with the current in Eq. 9 is given in Appendix A, but here only the relevant terms are kept. The pseudoscalar coupling is usually approximated by the pion pole term. At low momentum transfer this leads to a suppression of the amplitude by a factor involving the $\pi$-nucleon coupling, $g_{\pi NN}$, and the pion decay constant, $f_\pi$. Since this is small it is dropped from further discussion. Weak magnetism is generated by the $f_2$ term. There is an explicit factor of $1/M$ in the coupling, so one may ignore the square of the weak magnetism term, but there may be interference between weak magnetism and the vector and axial vector interactions. The interference with the vector interaction vanishes at order $1/M$, which leaves just a correction proportional to $g_A f_2$, the weak magnetism correction $\delta_{wm}$ (Eq. 10),
where use of the zeroth order cross-section in normalizing $\delta_{wm}$ is acceptable since there is already one power of $1/M$ in the correction.
Next, consider the $1/M$ corrections to the usual $V$ plus $A$ interactions. These will be referred to collectively as the 'recoil' correction, since a major component of the correction may be understood as a reduction in the phase space for the outgoing lepton due to the energy carried off by the nucleon. The correction is calculated in the frame of the target nucleon by 1) expressing the differential cross-section in terms of the invariants $s$ and $t$, and the particle masses, 2) expressing $s$ and $t$ in terms of the incident lepton energy, 3) integrating over phase space, and 4) extracting all terms to the required power of $1/M$. One must keep the full expression for the outgoing momentum, $p'$, since the leading part of $E'$ cancels in some parts of the calculation. The invariant $s$ may be written exactly as $s = M^2 + 2ME + m^2$, with $m$ the incident lepton mass, and then expanded to first order in $1/M$.
Integrating over $t$, keeping just the terms first order in $1/M$, and applying the appropriate normalization yields the recoil correction, $\delta_{rec}$ (Eq. 12).
Note that the interference between the vector and axial vector currents has exactly the same structure as that between the axial vector and weak magnetism interactions.
2.3 Thermal averages of $\sigma v$ and $(1-f_3)$
For the remaining corrections one must perform averages over scattering angle and/or thermal averages over the nucleon momentum. The strategy presented here is to evaluate these corrections separately for the lepton blocking factor $(1-f_3)$, and for the product $\sigma v$. Each is developed as a power series in the cosines of the scattering angle and of the incident angle of the initial lepton momentum relative to the nucleon momentum, labeled by $\cos\theta^*$ and $\cos\alpha$ respectively. It is only necessary to include terms up to second order in the cosines, since each factor of a cosine comes accompanied by the nucleon velocity, which is of order $\sqrt{T/M}$. Further, terms first order in $\cos\theta^*$ and $\cos\alpha$ integrate to zero and may be dropped, although only after the two series are multiplied together to pick up the angular correlations between the corrections to $(1-f_3)$ and $\sigma v$.
The thermal averaged $\sigma v$ for a lepton of energy $E$ is given by averaging the rest frame rate over the nucleon Boltzmann distribution (Eq. 13).
Eq. 4 can be used for $\sigma v$ with just two changes. First, one must use the lepton energy in the nucleon rest frame,
$$E_r = \gamma\,E\,(1 - v_N\, v \cos\alpha), \qquad (14)$$
where $\gamma$ here is the relativistic factor for the initial nucleon. Second, $\sigma v$ must be multiplied by a factor
$$\gamma\,(1 - v_N\, v \cos\alpha) \qquad (15)$$
to account for the change in lepton flux seen in the nucleon rest frame. The thermal average is then done by expanding in powers of $v_N$ and replacing $v_N^2$ by its thermal average, $\langle v_N^2 \rangle = 3T_\gamma/M$. This procedure is totally equivalent to the more standard practice of using the Lorentz invariant cross-section expressed in terms of $s$ and using the Lorentz invariant flux factor
$$F = \frac{\sqrt{(p_1 \cdot p_2)^2 - m_1^2\, m_2^2}}{E_1 E_2}. \qquad (16)$$
However, a difficulty arises in the use of Eq. 16. When $F$ is expanded in terms of $v_N$, terms of order $v_N/v$ are generated, but there is a region of phase space where the lepton velocity $v$ is small compared to $v_N$, and this expansion is not valid. Using the rest frame and Eq. 15 avoids this problem for the incident lepton velocity.
The result of performing the thermal average is an effective correction to $\sigma v$ for incident lepton energy $E$, denoted $\delta_{th}(E)$ (Eq. 17).
The last term in Eq. 17 presents a problem akin to that just discussed concerning the flux factor; namely, when the reaction energy is near threshold the final state lepton velocity will be small if that lepton is massive, i.e., it is an electron or positron. This is not a problem for the $\nu_e n \to e^- p$ reaction since the positive value of $\delta$ always keeps the electron energy well above threshold, but it is a problem for $\bar\nu_e p \to e^+ n$. It is also not a difficulty for reactions with final state neutrinos since then $v' = 1$.
The anomalous powers of $v'$ are symptomatic of a deeper problem with the thermal averaging. The averaging procedure adopted here is only valid when the change in outgoing lepton momentum due to nuclear mass effects is small compared to its value when the nucleon mass is taken to infinity. This is not true near threshold[12, 13], where $p' \to 0$. As an example, consider an incident lepton whose energy is the threshold energy for a nucleon at rest. Then, for those nucleons moving toward the lepton the effective reaction energy is above threshold. Thus, after thermal averaging, the threshold should no longer be sharp. Fortunately, the reaction rates are not dominated by the behavior near threshold, since phase space vanishes there. The error introduced by the adopted procedure seems to be acceptably small, as will be discussed later.
Now consider the lepton blocking factor $(1-f_3)$. Assuming that the leptons are in thermal equilibrium (see Dodelson and Turner for a discussion of this point) the blocking factor is
$$1 - f_3(E_3) = 1 - \frac{1}{e^{E_3/T} + 1}, \qquad (18)$$
where $E_3$ is the true energy of the outgoing lepton. The factor depends only on the energy of the outgoing lepton. Unfortunately, $E_3$ is a function of both the scattering angles and the relative motion of the initial lepton and nucleon, so an integration over all of phase space is unavoidable.
The relevant corrections to the blocking factor are derived in Appendix B (corrections due to the final state occupation number). There are four corrections, organized by factors of $v_N$ and $\cos\theta^*$, that constitute the blocking factor up to order $1/M$. The corrections are normalized by the zeroth order term, $1 - f_3(E')$, so that the full blocking factor is the zeroth order factor multiplied by one plus the four correction terms.
The first correction is first order in $v_N$ and should be combined only with the zeroth order part of $\sigma v$ to produce an effective correction to the cross-section (Eq. 20),
which generates a correction to the rate when integrated over incident lepton energy. The next term is of order $v_N^2$. When combined with the correction to $\sigma v$ due to thermal averaging, a second effective correction is produced (Eq. 21).
Finally there are two pieces associated with the $\cos\theta^*$ dependence of the differential cross-section. The first piece is proportional to $v_N \cos\theta^*$ and must be combined with only the $\cos\theta^*$ part of the cross-section to yield a correction (Eq. 22).
The second piece is angle independent but of order $v_N^2$ and is combined only with the leading piece of the cross-section to yield a second correction (Eq. 23).
In the following section, these two terms are combined to form a single correction.
2.4 Results for the corrected rates
In the previous two sections six corrections to the weak rates that are formally of order $1/M$ were identified: weak magnetism, recoil, thermal averaging, and three corrections involving the blocking factors (Eqs. 10, 12, 17, and 20-23); these should be combined with the zeroth order rate and integrated over incident lepton energy to produce corrections to the rates. Fig. 3 shows a plot of the differential correction for each of the six corrections to the reaction $\nu_e n \to e^- p$, near freezeout.
First, consider the three small terms associated with the blocking factors, which have all been exaggerated by a factor of 100 in the figure. Clearly, these three terms are much smaller than the other three. The main reason for this is easy to understand. Due to the growth of the cross-section and of the phase space with energy, the rates are dominated by leptons with energies of several times the temperature. At such energies the blocking factors are small, and corrections to them are even smaller. This can be seen explicitly in Appendix B, where it is shown that each correction to the blocking factor carries at least one extra factor of the occupation number $f_3$. In addition, the proliferation of terms in the expansions leading to the recoil and thermal averaging corrections is greater than for the terms associated with the blocking factors. A third suppression applies to one of the blocking terms, which is proportional to a quantity numerically about a tenth of the corresponding factor entering the thermal corrections. For all these reasons, the three small corrections are dropped from most of the discussion that follows.
Now turn to the three larger corrections, beginning with that for weak magnetism. Fig. 4 shows $\delta_{wm}$ weighted by phase space considerations to produce a correction to the differential interaction rate per baryon, which can be found by substituting the correction for $\sigma v$ in Eq. 5. The scale for this graph should be compared to Fig. 1. The corrections to each reaction are of order 1% at freezeout, but apart from small contributions near thresholds one can see that there is an almost exact cancellation between the lepton reactions ($\nu_e n$ and $e^- p$) and the anti-lepton interactions ($e^+ n$ and $\bar\nu_e p$). This is due to an effective change in sign for the value of $g_A$ when considering leptonic and antileptonic scattering, i.e., the anti-leptonic current is right handed. Thus, although the corrections are large for each of the individual reactions, the net effect on nucleosynthesis due to weak magnetism is fairly small. It is not, however, totally negligible. There are differences in the phase space details for the different channels, and the neutrino temperature is in fact less than the electron temperature. As a result, near freezeout the $e^+ n$ channel is slightly more important than the $\nu_e n$ channel, and weak magnetism causes a small decrease in the net $n \to p$ rate. This, in turn, causes a slight increase in $X_n$. At cooler temperatures, the positron density drops and the $e^+ n$ channel becomes insignificant, but by then the scattering rates as a whole are small and the weak magnetism corrections are not important then.
Next, consider Fig. 5, which shows the correction to the differential reaction rate due to recoil effects, $\delta_{rec}$. Here the sign of the effect is the same for all reactions. The final state phase space for the outgoing lepton is reduced and this causes a reduction in the cross-sections of about 1% near freezeout. The magnitude of the reduction increases with temperature. This can be seen by examining the recoil curve in Fig. 3, where the fractional size of the recoil effect is seen to increase approximately linearly with energy. When weighted by a thermal distribution the fractional change in rate will increase with $T$.
Even though all the reactions are affected in a similar way, that does not imply that there will be no effect on nucleosynthesis. Since all the rates are reduced, freezeout of the neutron-proton ratio will take place a little earlier, when the neutron abundance is higher. As a result there will be more helium. Further, the rates are not reduced in proportion to the zeroth order rates, so there may be a shift in $X_n$ even at high temperatures, when the rates are fast. These effects will be discussed further in section 4.
The third important correction is that due to thermal averaging over the nucleon momentum, illustrated in Fig. 6. Here again all reactions are affected in a similar way, only now the rates are slightly increased. The increase is due to the fact that the average collision energy is slightly enhanced by the nucleon motion, and since the cross-sections increase with energy, the rates increase due to this effect. Comparison of Fig. 5 and Fig. 6 shows that the thermal averaging effect is about 1/3 the effect due to recoil, so that the net effect of the two processes is to decrease the reaction rates.
The total reduction in rate arises from integrating over initial lepton energies. Fig. 7 shows the fractional change in rate for the four scattering reactions as a function of temperature. The curves include all six terms shown in Fig. 3. The reduction increases nearly linearly with temperature, although there are deviations at low temperatures. The linear increase is a consequence of the fact that of the three small parameters, $m_e/M$, $\Delta/M$, and $T/M$, the latter is by far the largest. The coefficient, 5, reflects the increase of cross-section and phase space with initial lepton energy.
Another feature of Fig. 7 is that at high temperatures the corrections to the $n \to p$ rates are either less positive or more negative than the corresponding corrections to the $p \to n$ rates. This is not unexpected since the order $1/M$ correction to the equilibrium abundance of neutrons should result in a 0.2% increase in $n/p$, and this must be reflected by a change in the rates which maintain equilibrium. At low temperatures two things happen. First, the neutrino and photon temperatures are no longer equal, so equilibrium arguments no longer apply. Second, the difficulties with the threshold behavior in the $\bar\nu_e p \to e^+ n$ channel become apparent. Fortunately, the threshold behavior does not become a problem until well below freezeout, and by that time the absolute rate of those reactions is so small (see Fig. 2) that the error in the correction to $X_n$ is insignificant.
3 Corrections to Neutron Decay
As mentioned in the introduction, the rates used in big bang nucleosynthesis calculations are not usually calculated from first principles, but are normalized to the experimental lifetime for neutron decay. Originally this had the advantage of partially accounting for some of the effects left out of the calculation, such as the coulomb and radiative corrections. In the present case, this convention requires us to calculate the recoil corrections for neutron decay, since those corrections are, in effect, already included in the numerical BBN codes.
Write the scattering rate for one of the channels as
$$\Gamma_s = \Gamma_{s0}\,(1 + \delta_s), \qquad (24)$$
where $\Gamma_{s0}$ is the zeroth order scattering rate, and $\delta_s$ is the correction normalized to $\Gamma_{s0}$. Similarly, the neutron decay rate may be written as
$$\Gamma_d = \Gamma_{d0}\,(1 + \delta_d), \qquad (25)$$
where the decay rate is approximated by the sum of zeroth and first order terms in an expansion in $1/M$. The zeroth order scattering and neutron decay rates are related, schematically, $\Gamma_{s0} = g(T)\,\Gamma_{d0}$, where $g$ is some function of temperature and the particle masses. Since the nucleosynthesis codes are normalized to the experimental decay rate, but include no recoil corrections, they effectively use a scattering rate $g(T)\,\Gamma_d = \Gamma_{s0}(1 + \delta_d)$. The correction to the current calculations may then be estimated
$$\delta = \delta_s - \delta_d. \qquad (26)$$
In the last section the various $\delta_s$ were calculated, implicitly; in this section the corresponding $\delta_d$ is evaluated.
For laboratory neutron decay it is only necessary to evaluate the recoil corrections - there are no thermal averages, nor any blocking factors. Although weak magnetism affects the angular correlations of the decay products, its effects drop out of the total decay rate at first order because the interference term with the axial current is zero when integrated over leptonic phase space. This can be used as a check of the calculation.
The decay to three bodies can be put into a form similar to that for scattering processes (Eq. 27),
where the integrand is identical in form to that for the cross-sections but with $\sigma v$ evaluated for an 'initial' lepton energy equal to minus the energy of the corresponding lepton in the decay. It is then straightforward to use the corrections derived in section 2. Graphs of the corresponding differential decay spectra and corrections are shown in Fig. 8. One can see that weak magnetism contributes to the asymmetry but not to the total decay rate; however, the recoil correction does reduce the decay rate, by a small amount (Eq. 28).
Noting that we have not included Coulomb and radiative corrections, the computed zeroth order neutron half-life differs slightly from the half-life including recoil. In the next section, where the rate equations are solved for $X_n$, it will be advisable to account for as much of the Coulomb and radiative corrections as possible so as to isolate the corrections due to nucleon mass effects. To do this it should be adequate to adjust both the neutron decay rate and the scattering cross-sections by a constant factor. This can be done easily by increasing the effective Fermi constant.
Wilkinson has performed a comprehensive examination of the corrections to neutron decay. In an effort to obtain a reliable accuracy at the level of one part in $10^4$, he evaluated all effects that would plausibly contribute at that level. These include recoil, weak magnetism, radiative, and coulomb corrections to second order as well as other small corrections, e.g., due to the finite size of the nucleons. Specifically, his Table 4 includes a recoil correction. This result differs from Eq. 28 in magnitude and sign (!), but the difference is due solely to different definitions of what is meant by the recoil correction.
Wilkinson writes the decay rate as a constant times a phase space integral over the electron spectrum,
where the endpoint of the integral is the electron endpoint energy $E_0$, including recoil effects. He then identifies the recoil correction as the change in the integrand at fixed endpoint,
but this does not include the correction to the integral due to the change in the electron endpoint energy from its zeroth order value $\Delta$ to $E_0 \approx \Delta - (\Delta^2 - m_e^2)/2M_n$.
The change due to the endpoint of integration is small since the integrand vanishes there in any event, but the decrease in the integrand due to the reduced endpoint is significant. Wilkinson includes this term in his definition of the zeroth order phase space integral, whereas in the current paper it is included as part of the recoil correction. The current nucleosynthesis codes assume that the change in lepton energy is $\Delta$, which is the zeroth order value for the endpoint conventions used in this paper. Even though Wilkinson puts the endpoint shift into the zeroth order phase space integral, it is still present in his full phase space factor, correct to second order in $1/M$. Therefore, results of neutron decay based on Wilkinson's work should be valid.
The recoil correction to neutron decay should not be applied to neutron decay in the early Universe, since the rate used in the code is the experimentally determined value. There is, however, a small thermal correction to neutron decay due to the thermal averaged time dilation factor. The neutron decay rate should be divided by a factor of $\langle\gamma\rangle \approx 1 + 3T_\gamma/2M$. Since neutron decay is more important at late times, when $T_\gamma$ is small, this correction, although technically of order $1/M$, is numerically quite small.
4 Estimate of the change in $Y$
All the pieces are now in place to estimate the change in $Y$. No effort will be made in this paper to incorporate the modified rates in a full nucleosynthesis code. Rather it should be sufficient to examine the evolution of the neutron fraction down to the breaking of the deuterium bottleneck with and without the nucleon mass corrections. The increase in $Y$ due to these corrections is given by twice the increase in the neutron fraction, $\Delta Y = 2\,\Delta X_n$.
To perform the evolution, a simplified numerical model of the early Universe was constructed. One sector included neutrons, protons, electrons, and photons in thermal equilibrium at a temperature $T_\gamma$. The other contained three neutrino species in equilibrium at a temperature $T_\nu$. Account was taken of $e^+e^-$ annihilation for keeping track of the energy density and the expansion rate of the Universe, so that in general $T_\gamma \neq T_\nu$. The effect of different temperatures was included in the rate calculations.
The zeroth order scattering rates, Eq. 5, and the corrections Eqs. 10, 12, 17, 20, 21, 22, 23 were calculated on a logarithmic temperature grid and interpolating functions were created that reproduced the numerical integration (at new points) to better than the required accuracy over the temperature range of interest. This was done for each of the four channels. The experimental rate for neutron decay was modified by the thermal lorentz dilation factor. The rates for inverse neutron decay were inferred using detailed balance and the known equilibrium neutron fraction. Since this channel is numerically unimportant the error introduced by this procedure is not important. The total conversion rates are then sums over the channels.
These rates were used to solve for $X_n$ by integrating
$$\frac{dX_n}{dt} = -\Gamma_{n\to p}\,X_n + \Gamma_{p\to n}\,(1 - X_n),$$
where $\Gamma_{n\to p}$ and $\Gamma_{p\to n}$ are sums over the appropriate reaction rates. $H$ is the expansion rate given by
$$H^2 = \frac{8\pi G_N}{3}\sum_i \rho_i,$$
where $G_N$ is Newton's constant and $\rho_i$ is the density in species $i$ calculated for the appropriate mass and temperature. The photon and neutrino temperatures were derived assuming adiabatic expansion and totally decoupled neutrinos.
The integration was started at a temperature well above freezeout. For the zeroth order case the initial neutron to proton ratio was set to its equilibrium value, but for the calculation with corrections the initial value was set to the corrected equilibrium value. In fact, the end results are essentially independent of initial conditions since the reaction rates are so fast that dynamic equilibrium is quickly achieved.
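A toy version of this evolution, not the paper's code, can be written in a few lines, using the same stand-in rates as the earlier sketch, detailed balance for $p \to n$, and $T_\nu = T_\gamma$ throughout (so $e^\pm$ annihilation heating is ignored); it reproduces only the gross behavior, not the paper's numbers.

```python
# Sketch: toy evolution of the neutron fraction (not the paper's code),
#   dX_n/dt = -lam_np X_n + lam_pn (1 - X_n),
# stepped in temperature with dt = -dT/(H T) (radiation domination, T ~ 1/a).
# Uses the same stand-in rates as above, detailed balance for p -> n, and
# T_nu = T_gamma throughout, so e+e- annihilation heating is ignored.
import math

TAU_N, MPL, GSTAR, DELTA, HBAR = 886.7, 1.221e22, 10.75, 1.293, 6.582e-22

def lam_np(T):
    x = DELTA / T
    return 255.0 / (TAU_N * x ** 5) * (12.0 + 6.0 * x + x * x)

def lam_pn(T):
    return lam_np(T) * math.exp(-DELTA / T)   # detailed balance

def hubble(T):
    return 1.66 * math.sqrt(GSTAR) * T * T / MPL / HBAR

T, T_end, dT = 10.0, 0.1, 1e-4                # MeV
Xn = 1.0 / (1.0 + math.exp(DELTA / T))        # start in equilibrium
while T > T_end:
    dt = dT / (hubble(T) * T)                 # seconds per temperature step
    Xn += (-lam_np(T) * Xn + lam_pn(T) * (1.0 - Xn)) * dt
    T -= dT
print("X_n at T = 0.1 MeV:", round(Xn, 3))    # roughly 0.15 in this toy
```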
Fig. 9 shows the resulting $X_n$. The equilibrium values are also shown to illustrate the freezeout of the scattering reactions, followed by the slower neutron decay. The breaking of the deuterium bottleneck is defined, in an ad hoc way, to occur when the temperature falls to a fixed value late in the evolution.
The zeroth order and corrected results for $X_n$ are so close that the difference cannot be shown in Fig. 9. To bring out the correction, the difference $\Delta X_n$ is plotted as the solid curve in Fig. 10. The maximum correction occurs around freezeout, but is diminished by neutron decay until the deuteron bottleneck breaks and the remaining neutrons are cooked into $^4$He. The correction to $X_n$ at this point is $\Delta X_n \approx 0.0006$, yielding a correction to the helium abundance of $\Delta Y \approx 0.0012$.
It is interesting that at high temperatures the reaction rates for the model with nucleon mass corrections do not appear to reproduce the corrected equilibrium neutron fraction, shown as the dotted curve in Fig. 10. The difference can be understood as being due to corrections that are second order in $1/M$ - both in the equilibrium abundance and in the rates.
A test of this can be done by forming a residual $R$, the fractional imbalance between the $p \to n$ and $n \to p$ rates evaluated at the equilibrium neutron fraction, which should vanish through first order in $1/M$ when $T_\nu = T_\gamma$.
A graph of $R$ is shown as the solid curve in Fig. 11. At high temperatures $R$ is increasing because the second order corrections are increasing. Near freezeout the residual accounts for most of the difference between the dynamical and equilibrium corrections in Fig. 10. At lower temperatures there is no problem with the corrected rates producing corrected equilibrium fractions; rather, one only needs to ascertain that $R$ is much less than the individual first order corrections. Indeed, the residual is much smaller than the individual corrections (typically a few percent) for temperatures around freezeout.
At very low temperatures $R$ again becomes significant. The problem goes back to the poor threshold behavior of the $\bar\nu_e p \to e^+ n$ reaction. This was checked by arbitrarily setting $\Delta = 0$, which should alleviate the threshold problems, and increasing $M$. In that case, $R$ scaled as $1/M^2$ across the full temperature range.
Fig. 11 also shows several other examples of $R$ with different terms included in the rates. The solid curve at the bottom shows the residual in the limit of infinite mass nucleons; its level reflects the accuracy of the numerical integration. The dotted curves show $R$ for the cases where the correction includes a) recoil, b) thermal averaging, c) recoil and thermal averaging, and d) recoil, thermal averaging, and the small blocking corrections. For both cases c) and d), $R$ is smaller than in the previous case as more of the terms necessary to achieve thermal equilibrium are included. The magnitude for case d) is indicative of the nature of the residual. Note that it is not necessary to include the weak magnetism corrections in this analysis, since one can consistently imagine another world where $f_2 = 0$, and $R$ should still vanish to second order.
The conclusion of these investigations is that the numerical accuracy of the approximations and numerical integrations is adequate for temperatures below freezeout. The major weakness is the poor threshold behavior, which induces errors of order the correction itself in the $\bar\nu_e p \to e^+ n$ channel at low temperatures. Since this channel is not so important then, the numerical accuracy of the corrections presented here is estimated to be about 10% (equivalent to $\Delta Y \sim 0.0001$). There are also errors at higher temperatures since the corrections are only first order in $1/M$, but these errors are dynamically erased by the fast reaction rates that persist down to freezeout.
It would be useful to have a simple approximation for the corrections, since encoding the full expression into a nucleosynthesis code and performing the phase space integrals at each step would be a time consuming exercise. An approximation linear in the temperature was developed,
which represents averages for the two channels that enter into the forward or back reactions. As such these may be readily applied to the polynomial formulae used in Wagoner's code to approximate the $n \to p$ and $p \to n$ reaction rates. Before doing this one must separate out those pieces due to neutron decay and inverse decay and treat them on a separate footing, as described above. The approximate formulae do not include the correction to the neutron lifetime, so this must be added in separately.
The result of carrying out this procedure for the simplified model of the early Universe used in this paper is shown as the dashed curve in Fig. 10. The solution for $\Delta X_n$ matches that derived from direct integration of the rates to better than 10% for temperatures below freezeout. This is comparable to the estimated uncertainty in the calculation of the rates due to the improper treatment of the threshold effects. The parameters in the approximation were chosen by fitting the $n \to p$ and $p \to n$ reactions over temperature ranges that cover freezeout for the different channels and avoid, for the most part, sensitivity to the threshold behavior of the rates.
Finally, to isolate the effects of weak magnetism, $\Delta X_n$ was calculated with a set of rates where $f_2$ was set to zero. The resulting increase in $Y$ was 0.0009 instead of 0.0012; i.e., about 1/4th of the net increase in $Y$ can be attributed to weak magnetism. The bulk of this contribution comes near freezeout, where the $e^+ n$ channel is slightly more important than the $\nu_e n$ channel because of kinematics and also because $T_\gamma$ is slightly greater than $T_\nu$.
It doesn't really make sense to perform a similar calculation to try and isolate the recoil vs. the thermal averaging corrections. The point of the analysis of the residual is that both are necessary to achieve a sensible thermodynamic result if one were to take $T_\nu = T_\gamma$. Even so, including just recoil corrections to the rates leads to a change in $Y$ of just 0.0002. This is somewhat surprising since the recoil corrections were larger than, and of the opposite sign to, the thermal averaging corrections. Based on this, one might have expected the recoil corrections alone to give a larger correction to $Y$, partly compensated by the thermal averaging corrections. This is not the case. An explanation can be found in the details of Fig. 1 and Fig. 5, where the corrections can be seen to be not simply proportional to the zeroth order rates.
The main point of this paper is that the primordial helium abundance predicted by big bang nucleosynthesis calculations should be increased by $\Delta Y \approx 0.0012$. It is difficult to attach a firm level of uncertainty to this number, but the results displayed in Fig. 11 and the accompanying text suggest that an uncertainty of 10% should be inferred.
This is a significant correction, but does not dramatically alter the conclusions that may be drawn from studies of BBN. Consider the changes implied for the baryon density of the Universe as inferred from nucleosynthesis calculations. Walker and Kernan[17, 18] have recently analyzed the uncertainties in the big bang helium calculation, but they do not include the corrections due to nucleon mass effects. Adapting their result to include the results presented here, the primordial helium abundance for the standard cosmology with three neutrino species is given by Eq. 38,
where $\eta$ parameterizes the baryon density. Without the corrections, the first coefficient would be 0.2398, instead of 0.2410. In Eq. 38, the first uncertainty represents the error due to uncertainties in the nuclear reaction network, of which "80-90%" is due to uncertainty in measurements of the neutron decay rate. The second uncertainty allows for some of the smaller corrections to the weak interaction rates, for example, the deviation of the neutrino spectrum from thermal equilibrium. The nucleon mass corrections, approximately +0.0012 in $Y$, are equivalent to a sizable shift in the neutron decay rate, and are much larger than any other known uncertainty in calculating the weak rates.
It is difficult to determine the primordial abundance of through direct observation due to a) the inert nature of neutral helium, b) chemical pollution through stellar burning, and c) the high accuracy required of the measurement - better than 1% is desired. Walker, et al. suggest a primordial abundance in the range . The limits are suggestive of 95% confidence levels, but there is no statistically rigorous upper bound to the helium abundance. For the sake of argument then, take 0.24 as the upper limit and allow the uncertainty due to the neutron lifetime in Eq. 38 to be favorable at the level, i.e., allow the two uncertainties to add to the helium abundance. (In fact the was derived using a uncertainty in the neutron lifetime, where the current particle data book uncertainty is , so the uncertainty due to the reaction network should become ; the maximum allowed value of would correspondingly decrease from 3.18 to 3.10.) These constraints require . Without the 0.0012 correction due to nucleon masses the corresponding number is . These numbers should be compared with the constraints derived from comparisons of observations of D, , and with BBN calculations, .
Taken at face value, a substantial portion (but not all) of the allowed parameter space for is eliminated by the nucleon mass correction; however, one should always keep in mind the difficulties of helium observations. If the upper limit were there would be no significant constraint from the consideration of . On the other hand, the discussion in the previous paragraph was based on two separate favorable assumptions, both at the level: a) allowing , and b) taking the neutron lifetime to be near the lower end of the allowed range. Dropping either of these assumptions from the favorable to the neutral category eliminates any allowed values of .
Another use of the primordial helium abundance is to constrain the energy density at the time of nucleosynthesis. This is often parameterized by the number of neutrino species, . The corrections are equivalent to neutrino species. Again, belief in constraints placed on particle physics models depends upon one’s faith in the helium observations.
Over the years, there have been several papers written which treat recoil corrections in processes. In addition to the Wilkinson paper on neutron decay, Fayans and Vogel have studied recoil and weak magnetism corrections to the reaction in the context of laboratory neutrino oscillation experiments. The results here are in agreement with Fayans for both recoil and weak magnetism. There is also agreement with Vogel concerning weak magnetism. It is more difficult to compare to Vogel’s results for the recoil correction, since he gives the correction in terms of the final state lepton energy, whereas the results in this paper express the corrections in terms of the initial lepton energy. I know of no paper which deals with the thermal corrections or the corrections to the blocking factors that are relevant for the big bang nucleosynthesis scenario.
Acknowledgements While this work was underway I became aware of the work by Kernan and Walker, who were also beginning to look into the question of recoil corrections. I am indebted to them for sharing the details of their previous work and thoughts about the issues presented here. I would also like to thank S.M. Barr, P. Vogel, J. Engel and E.W. Kolb for useful discussions. This work was partially supported by DOE grant DE-AC02-78ER05007, and by the University of Delaware Research Foundation.
Appendix A: The squared matrix element
For completeness, here is the spin summed squared matrix element for the Lagrangian in Eq. 8 and Eq. 9, to leading order in . The terms are grouped by coupling constant and expressed in terms of relativistic invariants. The invariant has been eliminated in favor of and and particle masses. The particle identifications are: 1: incoming lepton, 2: incoming baryon, 3: outgoing lepton, 4: outgoing baryon. In this expression the vector and axial couplings are given explicitly as and , instead of specifying the ratio , as in the text.
1. Determine the amount of the inoculum by the use of test plates. 2. Use a dilution of the suspension that gives 25 percent light transmission in lieu of the stock suspension.
(b) Preparation of working standard stock solutions and standard response line solutions. For each antibiotic listed in the table in this paragraph, select the working standard drying conditions, solvent(s), concentrations, and storage time for the standard solutions and proceed as follows: If necessary, dry the working standard as described in § 436.200; dissolve and dilute an accurately weighed portion to the proper concentration to prepare the working standard stock solution. Store the working standard stock solution under refrigeration and do not use longer than the recommended storage time. Further dilute an aliquot of the working standard stock solution to the proper concentrations to prepare the standard response line solutions. The reference concentration of the assay is the mid concentration of the response line.
Further dilute aliquots of the working standard stock solution with dimethylsulfoxide to give concentrations of 12.8, 16, 20.0, 25, and 31.2 micrograms per milliliter.
Weigh a separate portion of the working standard and determine the loss on drying by the method described in § 436.200(c) of this chapter. Use this value to determine the anhydrous potency of the working standard.
Working standard should be stored below minus 10 °C under an atmosphere of nitrogen. Netilmicin sulfate is hygroscopic and care should be exercised during weighing.
(c) Procedure for assay. For the standard response line, use a total of 12 plates—three plates for each response line solution, except the reference concentration solution which is included on each plate. On each set of three plates, fill three alternate cylinders with the reference concentration solution and the other three cylinders with the concentration of the response line under test. Thus, there will be 36 reference concentration zones of inhibition and nine zones of inhibition for each of the four other concentrations of the response line. For each sample tested use three plates. Fill three alternate cylinders on each plate with the standard reference concentration solution and the other three cylinders with the sample reference concentration solution. After all the plates have incubated for 16 to 18 hours at the appropriate incubation temperature for each antibiotic listed in the table in paragraph (b) of this section, measure the diameters of the zones of inhibition using an appropriate measuring device such as a millimeter rule, calipers, or an optical projector.

(d) Estimation of potency. To prepare the standard response line, average the diameters of the standard reference concentration and average the diameters of the standard response line concentration tested for each set of three plates. Average also all 36 diameters of the reference concentration for all four sets of plates. The average of the 36 diameters of the reference concentration is the correction point of the response line. Correct the average diameter obtained for each concentration to the figure it would be if the average reference concentration diameter for that set of three plates were the same as the correction point. Thus, if in correcting the highest concentration of the response line, the average of the 36 diameters of the reference concentration is 16.5 millimeters and the average of the reference concentration of the set of three plates (the set containing the highest concentration of the response line) is 16.3 millimeters, the correction is +0.2 millimeter. If the average reading of the highest concentration of the response line of these same three plates is 16.9 millimeters, the corrected diameter is then 17.1 millimeters. Plot these corrected diameters, including the average of the 36 diameters of the reference concentration, on 2-cycle semilog paper, using the concentration of the antibiotic in micrograms or units per milliliter as the ordinate (the logarithmic scale), and the diameter of the zone of inhibition as the abscissa. The response line is drawn either through these points by inspection or through points plotted for the highest and lowest zone diameters obtained by means of the following equations:

L = (3a + 2b + c - e) / 5

H = (3e + 2d + c - a) / 5

where:
L = calculated zone diameter for the lowest concentration of the standard response line;
H = calculated zone diameter for the highest concentration of the standard response line;
c = average zone diameter of 36 readings of the reference point standard solution;
a, b, d, e = corrected average values for the other standard solutions, lowest to highest concentration, respectively.

To estimate the potency of the sample, average the zone diameters of the standard and the zone diameters of the sample on the three plates used. If the average zone diameter of the sample is larger than that of the standard, add the difference between them to the reference concentration diameter of the standard response line. If the average zone diameter of the sample is lower than that of the standard, subtract the difference between them from the reference concentration diameter of the standard response line. From the response line, read the concentrations corresponding to these corrected values of zone diameters. Multiply the concentration by the appropriate dilution factor to obtain the antibiotic content of the sample.
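The zone-correction arithmetic above is mechanical, so a short script can make it concrete. This is a minimal sketch, not part of the regulation; the function names are mine, the example numbers (16.5, 16.3, 16.9) come from the worked example in paragraph (d), and the a..e values passed to the endpoint formula are hypothetical.

    # Sketch of the zone-diameter correction and response-line end
    # points described in paragraph (d). Illustrative only.

    def corrected_diameter(set_avg_ref, correction_point, set_avg_conc):
        """Correct a concentration's average diameter so its plate
        set's reference average matches the overall correction point."""
        return set_avg_conc + (correction_point - set_avg_ref)

    def response_line_endpoints(a, b, c, d, e):
        """L and H per the formulas above: a..e are the corrected
        averages for the five response-line concentrations, lowest to
        highest (c is the reference concentration)."""
        L = (3*a + 2*b + c - e) / 5
        H = (3*e + 2*d + c - a) / 5
        return L, H

    # Worked example from the text: correction point 16.5 mm, set
    # average 16.3 mm, reading 16.9 mm -> 17.1 mm corrected.
    print(corrected_diameter(16.3, 16.5, 16.9))                 # 17.1
    print(response_line_endpoints(14.0, 15.0, 16.5, 18.0, 19.0))  # hypothetical a..e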
(39 FR 18944, May 30, 1974)
EDITORIAL NOTE: For FEDERAL REGISTER citations affecting § 436.105, see the List of CFR Sections Affected appearing in the Finding Aids section of this volume.
US 5642443 A
To determine the orientation of a set of recorded images, the recorded images are scanned. The scanning operation obtains information regarding at least one scene characteristic distributed asymmetrically in the separate recorded images. Probability estimates of orientation of each of the recorded images for which at least one scene characteristic is obtained are determined as a function of asymmetry in distribution of the scene characteristic. Probability of orientation for the set of recorded images is determined from the probability estimates of orientation of each of the recorded images in the set.
1. A method of determining orientation of a set of recorded images, comprising the steps of:
(a) scanning a set of recorded images, said set including a plurality of recorded images, to obtain information regarding at least one scene characteristic distributed asymmetrically in the separate recorded images;
(b) determining probability estimates of orientation of each of the recorded images for which at least one scene characteristic is obtained as a function of asymmetry in distribution of the scene characteristic; and
(c) determining a probability of orientation for said set of recorded images from the probability estimates of orientation of each of the recorded images in said set.
2. A method according to claim 1, further comprising the step of (d) image processing said set of recorded images according to the probability of orientation for said set of recorded images, determined in step (c).
3. A method according to claim 1, wherein step (c) of determining the probability of orientation for the set of recorded images is carried out using a Bayesian probability propagation model.
4. A method according to claim 1, wherein said at least one scene characteristic is color.
5. A method according to claim 1, wherein said at least one scene characteristic is modulation.
6. A method according to claim 1, wherein said step (a) of scanning comprises obtaining information regarding a plurality of scene characteristics.
7. A method according to claim 6, wherein said step (c) of determining probability of orientation for the set of recorded images is carried out employing information from different scene characteristics of the plurality of scene characteristics to determine said probability of orientation.
8. A method according to claim 7, wherein at least some of the plurality of scene characteristics are correlated to one another.
9. An apparatus for determining orientation of a set of individual recorded images, comprising:
a scanner which is operative to scan recorded images and provide digital signals representative of said recorded images; and
an orientation processor, coupled to said scanner to receive said digital signals, said processor being operative to execute the steps of:
determining the asymmetric distribution of at least one scene characteristic in said recorded images; and
determining the probability of orientation for said set of recorded images as a function of the determined asymmetric distribution of the at least one scene characteristic.
10. An apparatus according to claim 9, wherein said orientation processor is operative to determine asymmetric distribution by determining probability estimates of orientation of each of the individual recorded images, for which the at least one scene characteristic is obtained, as a function of the asymmetric distribution of said at least one scene characteristic in said recorded images.
11. An apparatus according to claim 10, wherein said orientation processor is operative to determine the probability of orientation for said set of recorded images in accordance with a Bayesian probability propagation model, employing the determined probability estimates of orientation of each of the individual recorded images to produce said probability of orientation for said set of recorded images.
12. An apparatus according to claim 9, wherein said at least one scene characteristic includes color.
13. An apparatus according to claim 9, wherein said at least one scene characteristic includes modulation.
14. An apparatus according to claim 9, wherein said orientation processor is operative to determine said asymmetric distribution by determining the asymmetric distribution of a plurality of scene characteristics.
15. A method according to claim 1, further comprising the steps of:
(d) ascertaining a probability that a certain type of image recorder was used to create said set of recorded images; and
(e) utilizing said probability from said ascertaining step in step (c).
The present invention relates to the field of image processing, and more particularly, to a mechanism for automatically determining the orientation of an order of recorded images.
Automatic digital imaging applications, such as the production of photographs on compact discs, and index prints on such discs, digitally printed automatic album pages, etc., require that the images be correctly oriented before the final output image format is generated. Currently these automatic procedures must be interrupted by a skilled operator who manually corrects any orientation failures, such as vertical (portrait) or upside down images. Upside down or inverted images occur with 35 mm cameras and most SUC's (single use cameras) that use right side load film transports. In these types of cameras, the film is loaded on the opposite side of the film gate relative to "normal" configuration cameras. The images produced by these cameras will be inverted in the final output format unless the exposed films are identified as such. Manual sorting of film is not possible prior to processing, since there is no way to determine if the camera was of the reverse wind variety. SUC's can be sorted, but this is a time consuming and costly process.
The knowledge of the image orientation of a scene also has application to conventional optical printing. For example, the yield (percentage of acceptable/saleable prints) of automatic exposure determination and subject classification algorithms used with optical printers would be increased if image orientation information were available.
Thus, there is currently a need for a mechanism which, when applied to entire customer orders, can discriminate film images captured in left side load cameras from images that were captured in right side load cameras. There is also a need for a mechanism for automatically determining the orientation of an entire order of recorded images that are being processed, without human intervention.
In accordance with the present invention the above described needs are satisfied by an image processing mechanism which is operative to determine the orientation of a set of recorded images, by scanning a plurality of recorded images to obtain information regarding at least one scene characteristic distributed asymmetrically in the separate recorded images. Probability estimates of orientation of each of the recorded images for which at least one scene characteristic is obtained are then determined as a function of asymmetry in distribution of the scene characteristic. A probability of orientation is then determined for the set of recorded images from the probability estimates of orientation of each of the recorded images in the set.
The present invention determines the orientation of a set (or "order") of recorded images of scenes by examining the characteristics of each scene along the two long sides of the image. These sides represent the top and bottom of a "landscape" orientation. For purposes of the invention, it is assumed that some fraction of landscape images contain characteristics that are asymmetrically distributed top to bottom. For example, green grass color may be found more often along the bottom of a scene, and in the same scene the absence of green grass color may be found more often along the top. By this logic, scenes with green grass color along both sides as well as scenes without green grass color along either side would be indeterminate with respect to that particular characteristic.
For instance, if a green color exceeding a certain saturation is found all along one side of a scene and it is not found along the other side, then, in over 90% of images, the green side is at the bottom of the scene. As a result, when this characteristic is later found in another image, it may be expected that the probability of the green side being at the bottom would still be about 0.90. Another characteristic with an exploitable asymmetry of distribution that can be expected to be found in scenes is sky color along the top of the scene.
The magnitude of the difference defining the characteristic is an important factor in determining the degree of the asymmetry in the distribution. Furthermore, it was expected that finding repeated instances of a characteristic difference all across the image would also increase the asymmetry in the distribution. The asymmetry in the distribution is a direct measure of the probability of determining the orientation for that scene correctly. For example, using a hypothetical characteristic with a low magnitude for the characteristic difference and a low number of instances of that characteristic found, the distribution of occurrences top to bottom is relatively flat. The opposite is the case when both the magnitude and the number of instances are high. In this example the preponderance of the cases occurs at the top of the image for this hypothetical characteristic.
With the present invention, no operator intervention is required to correctly orient images in automatic, digitally generated applications including photo compact discs, index prints, and Photo CD album pages. Another advantage of the invention is that existing scene balance algorithms can be used to provide information to the orientation processor. The information obtained from the image orientation processor can then in turn be used to improve the performance of the scene balance algorithm.
The present invention has the advantage that low resolution scan data from a number of different techniques can be used to determine image orientation. Also, a reliable indication from only one scene is adequate to determine orientation of an entire order. However, if a reliable indication is not obtained from an individual scene, a preponderance of evidence from several images can still be adequate to determine orientation of the entire order.
FIG. 1 shows a block diagram of an image processor constructed in accordance with an embodiment of the present invention;
FIG. 2 illustrates an exemplary sampling grid for an image;
FIGS. 3a and 3b illustrate the conversion of the 6×6 pixel array into 3×3 by subsampling;
FIG. 4 shows an example of the regions of the image that may be sampled;
FIG. 5 shows color axes and hue regions for determining color definitions;
FIG. 6 shows exemplary optimum axes and hue ranges used in the determination of color definitions;
FIG. 7 is a graph illustrating the relationship of probability of correct orientation to the average number of times a characteristic is found in an image;
FIG. 8a illustrates the increase of probabilities as a threshold is raised;
FIG. 8b illustrates the decrease in the probability of finding a high threshold as the number of images increases; and
FIG. 9 shows a uniform density gradient model.
Before describing in detail the new and improved mechanism for automatically determining the orientation of an order of recorded images in accordance with the present invention, it should be observed that the present invention resides primarily in what is effectively a prescribed digital image processing technique that may be implemented by means of conventional digital signal processing circuitry, or may be embedded within image processing application software executable by the control processor of a digital image processing workstation, through which respective images of a scene are processed.
Consequently, the manner in which such images are scanned and applied to a digital image processor has been illustrated in the drawings in readily understandable block diagram format, which shows only those specific details that are pertinent to the present invention, so as not to obscure the disclosure with details which will be readily apparent to those skilled in the art having the benefit of the description herein. Thus, the block diagram illustrations are primarily intended to illustrate the major components of the system in a convenient functional grouping, whereby the present invention may be more readily understood.
Referring now to FIG. 1, a block diagram of an embodiment of the apparatus used to perform the method of the present invention is diagrammatically illustrated. The "whole order" orientation method of the present invention operates using prescan data from a continuous roll of negatives passing through a high volume scanner. The continuous roll of negatives 10 includes a plurality of individual recorded images or "frames" 12. A scanner 14 digitizes the information contained in the frames 12 and provides this digitized information to a processor 16. The processor 16 includes the orientation processor 18 of the present invention, and a conventional image processor 20 for performing further image processing, such as enhancement, enlargement, cropping, etc. The image processor 20 provides the processed images to either a storage device 22 and/or an image reproduction device 24, which can be a thermal printer, for example.
The data provided by the scanner 14 is a continuous stream of 128 pixel lines. Groups of 192 lines are collected into individual images by a conventional frame line detection algorithm collecting data in parallel with the orientation method of the present invention and finishing slightly ahead of the orientation process. Alternatively, the groups of lines are collected by a pre-prescan film notcher which detects and marks the frame boundaries. It is advantageous to limit buffering of these lines of data as much as possible, and to collect all the necessary information on the initial pass through the scan data.
FIG. 2 shows an exemplary sampling scheme that is useful in the present invention. The image data is broken into 6 pixel by 6 line blocks. As can be seen, the 128 pixels by 192 lines result in a grid of 21×32 sampling regions (128 = 21×6 + 2; 192 = 32×6; the two extra pixels are accounted for below). This compares to currently used scene balance algorithms, which use a 24×36 sampling grid. In an exemplary embodiment of the invention, the first and last pixel on each line are discarded.
FIG. 3a shows that the pixels in each block are averaged together 2×2 into a 3×3 array of subsamples. The averaging is done in Log Exposure space, a common procedure for scene balance algorithms. The nine 2×2 averages can be used in a number of ways. One way is to generate a value which is representative of the image at that point in the 21×32 sampling grid. FIG. 3b shows alternative groupings of subsamples which could be averaged together to generate that representative value. In the preferred embodiment of the present invention, an "X" sampling pattern (reference numeral 26 in FIG. 3b) is used for all subsequent calculations. This pattern usually provides superior, or at least equal, results to any of the other options shown.
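As a concrete illustration of the sampling just described, here is a short sketch (Python with NumPy; the array and function names are mine, not the patent's, and the image block is assumed to already be in Log Exposure space):

    import numpy as np

    def block_subsamples(block6x6):
        """Average a 6x6 pixel block 2x2 into a 3x3 array of subsamples."""
        return block6x6.reshape(3, 2, 3, 2).mean(axis=(1, 3))

    def x_pattern_value(sub3x3):
        """Representative value from the five 'X' subsamples:
        the four corners plus the center of the 3x3 array."""
        s = sub3x3
        return (s[0, 0] + s[0, 2] + s[1, 1] + s[2, 0] + s[2, 2]) / 5.0

    # Example: one 6x6 block of log-exposure data
    block = np.arange(36, dtype=float).reshape(6, 6)
    print(x_pattern_value(block_subsamples(block)))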
In order to sample and detect asymmetrically distributed characteristics, the orientation processor 18 defines two sampling regions along each side of the image. These regions (TOP and BTTM) are shown in FIG. 4. As can be seen, in this embodiment each region is six "samples" deep, although other depths of regions are also used in the preferred embodiment. Each sample comprises the five subsamples in the "X" pattern 26 of FIG. 3b. These subsamples are each 2×2 averages of the original pixels. Separate observations of each desired scene characteristic with a potential asymmetry in the distribution are made in each of the 32 lines from an image. A net count is kept for each characteristic which is found to exceed predetermined limits defining the asymmetry as the processing progresses across the image. If the characteristic value is high in one direction, a counter is incremented; if it is high in the opposite direction, the counter is decremented. The net count for each of the different characteristics is then a measure which could be related to the probability that the bottom (or top) is on a particular side.
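The net-count bookkeeping can be sketched as follows (illustrative Python; the names and the threshold are assumptions, not taken from the patent):

    def net_count(top_values, bottom_values, threshold):
        """Net count of lines whose bottom/top difference exceeds the
        threshold: +1 when the 'bottom' side dominates, -1 when the
        'top' side dominates, 0 otherwise."""
        count = 0
        for top, bottom in zip(top_values, bottom_values):
            if bottom - top > threshold:
                count += 1
            elif top - bottom > threshold:
                count -= 1
        return count

    # One value per sampling line for each side (32 lines in this
    # scheme), e.g. the per-line maxima of a chroma signal.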
If an individual image is in fact in a portrait orientation, then the asymmetries should disappear because the characteristics would either be equally valued, or be randomly valued, on both the top and the bottom. This tends to result in null measures of the potential asymmetry.
Certain scene characteristics are more asymmetrically distributed than others, and thus are better predictors of the true orientation of the set of images (the whole order) than other scene characteristics. If it were possible to find one or more highly reliable characteristics in every scene, then this approach could be used to determine the orientation of individual scenes, although such characteristics are not presently known. However, for the full order orientation problem solved by the present invention, it is not necessary to find a useful characteristic in every image. Except for the extremely occasional odd instance, the landscape orientation of all the images in an entire order is either one side up or the other side up. Thus, by finding a reliable indication in even one image in an order, it is possible to determine the landscape orientation of the entire order using the present invention. Furthermore, even if the results from any single image are not conclusive, the preponderance of the evidence from several images may still be a reliable indication of the orientation for that order.
The success of an entire order orientation method depends upon finding reasonably reliable characteristics often enough in most orders so that a usefully accurate prediction of each order's (or set of images') orientation can be made. It also depends upon having a method for combining the orientation evidence (if any) from each image in one order. In embodiments of the present invention, the method of combining the evidence uses a Bayesian probability propagation model to sum up the results found in each of the frames of an order (set of images). Equation (1) below illustrates Bayes's Rule:

Pcurrent = (Plast Pframe) / [Plast Pframe + (1 - Plast)(1 - Pframe)]   (1)

where:
1. Po is the probability of knowing the correct orientation for the order before any image is examined. This could be based, for instance, on the mix of right side load vs left side load cameras being processed at a given location.
2. Plast is the probability of knowing the correct orientation for the order before the current image is examined.
3. Pframe is the probability of knowing the correct orientation for the current image given the results of the observations made on that image.
4. Pcurrent is the probability of knowing the correct orientation for the order including the results from the observations made on the current image.
Bayes' Theorem requires that the probabilities refer to a set of mutually exclusive and exhaustive alternatives. In this application there are only two mutually exclusive alternatives, either the whole order is "right side up", or it is "up side down". If the P's refer to the probability of it being "right side up", then (1-P) is the probability that it is "up side down".
There are a number of advantageous characteristics of Bayes's Rule. First it provides a simple way to keep score for the entire order as the evidence from individual images is accumulated. When no useful evidence is found in the current image, then using Pframe =0.5 results in no change to Pcurrent. Second, it is easy to see that the rule is commutative (i.e., if Plast and Pframe are interchanged, then the value of Pcurrent remains unchanged). More importantly, Bayes's Rule is also associative. This means that the net calculation from a series of images does not depend on the order in which it is done. Thus all the evidence from the images in an order can be accumulated and a probability of the orientation based purely on that evidence can be calculated before the effect of a mix of cameras (i.e. left side load vs. right side load) is introduced. This can be done simply by temporarily setting P0 at the start of each order to 0.5. Ultimately the effect of the mix (i.e., the initial prior probability) can be included at the end. Its value can be adjusted at each user location as an independent parameter.
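A compact sketch of this propagation (Python; the function and variable names are mine, not the patent's):

    def bayes_update(p_last, p_frame):
        """Two-hypothesis Bayes update from equation (1)."""
        num = p_last * p_frame
        return num / (num + (1.0 - p_last) * (1.0 - p_frame))

    def order_orientation(frame_probs, p0=0.5):
        """Accumulate per-frame evidence starting from a neutral 0.5;
        the camera-mix prior p0 is folded in at the end, which is
        legitimate because the update is commutative and associative."""
        p = 0.5
        for pf in frame_probs:
            p = bayes_update(p, pf)
        return bayes_update(p, p0)

    print(order_orientation([0.9, 0.5, 0.85]))  # ~0.98; the 0.5 frame changes nothing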
There are a number of factors which influence how well the present invention works. The following is a list of such factors:
1. The probability that a given characteristic correctly predicts the true orientation.
2. The probability of encountering the given characteristic in a typical mix of consumer images.
3. The correlation between successive encounters of the given characteristic within a customer order.
4. The number of images in the order.
5. The number of different characteristics that may be utilized.
6. The method to combine evidences from multiple characteristics found within a single image.
Pursuant to the present invention, when two or more characteristics are found in a single image, the joint probability for predicting the orientation correctly is not estimated. Instead, the probability associated with the most reliable characteristic is used. Future research may include trying a different method for assigning the orientation probability in those cases. It is possible, however, to provide for such an estimate of the joint probability.
The probability that a given characteristic correctly predicts the true orientation depends not only on the "truthfulness" of the characteristic for landscape type images, but also on the relative fraction of the time that the characteristic is mistakenly found in portrait type images. While certain characteristics are rarely found in portrait type images, this is not the case for other characteristics. Because of this, and because the mix of landscape and portrait types could vary from user to user or even for a given user slowly over time, the raw landscape probability for each characteristic must be adjusted to account for this fact. Equation (2) gives the necessary correction. It is based on the assumption that when an asymmetry measure is unfortunately found in a portrait frame, it will give the correct answer half the time and an incorrect answer the other half of the time (a small numerical sketch follows the definitions below):

PT = [fL PM + 0.5 (1 - fL) fP/L] / [fL + (1 - fL) fP/L]   (2)

where:
1. PT is the probability that the characteristic (measure), "M", gives the correct orientation for the given landscape and portrait mix.
2. PM is the probability that the characteristic, "M", gives the correct orientation for landscape images only.
3. fL is the fraction of all images which are landscape type.
4. fP/L is the ratio of portrait type to landscape type images in which characteristic "M" is found.
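As an illustrative check of equation (2), with hypothetical numbers not taken from the patent: let PM = 0.90, fL = 0.7, and fP/L = 0.5. Then PT = (0.7×0.90 + 0.5×0.3×0.5) / (0.7 + 0.3×0.5) = 0.705/0.85 ≈ 0.83; that is, portrait contamination pulls the usable probability down from 0.90 toward 0.5.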
It has been determined that characteristics could be found whose "truthfulness" varied somewhat inversely with their frequency of occurrence. This presents the question as to which trade-off is the better strategy: finding less reliable characteristics in more images, or finding more reliable characteristics in fewer images. In experiments by the inventors on images where two different characteristics were "found", the probability associated with the more reliable characteristic was used. It was found that the higher the probability that a given characteristic correctly predicts the orientation, the better. The more frequently that a reasonably reliable characteristic is found, the fewer orders will be found with weak or no evidence, but the more orders with grossly wrong evidence will be found as well. (Grossly wrong evidence is defined as generating a probability for the orientation which is so wrong in magnitude that no reasonable prior probability could save that order from being improperly oriented.) The fewer the images in the order, the poorer the results (as expected). Considering the level of reliability of the characteristics found to date, excellent results should be obtained from orders with at least 24 images. If finding a characteristic in an order increases the probability of finding it again in that order (correlated scene content within an order), the overall performance over many orders does not change much, but the number of orders with grossly wrong evidence does increase. When either of two characteristics, or both, may be found, the performance of the less reliable one will dominate the creation of gross mistakes, but at the same time there are far fewer orders with little or no evidence.
For a whole order orientation method to be viable, it must work on the most popular order size which is 24 images. The results of simulations have suggested that what is needed is either a sufficiently large group of characteristics so that one with at least an 80% success rate is found in nearly every image, or a smaller group of characteristics with a 90% success rate which can be found in about 40% of all images. Furthermore, simulations have shown that predicting the orientation of three or four image film "chops" would require a group of characteristics which were found in over 90% of all images and all of which resulted in success rates exceeding 90%.
When one looks at a lot of images, one notices that some characteristics occur more frequently at the top of scenes and others occur more frequently at the bottom. Grass color and sky color are examples. For those characteristics that casual observations indicate have an asymmetrical distribution, the problem is reduced first to defining an exact measure for that characteristic and then to estimating the probability that the exact measure correctly identifies the scene orientation. However, other useful characteristics may exist and be used in addition to those described below.
The exemplary embodiment of the present invention uses scene characteristics that are asymmetrically distributed top to bottom. These scene characteristics involve both colors and (lack of) textures. The color aspect will be described first. The color axis may be defined by taking simple differences of the normalized primaries, and the hue range can be defined by simple comparisons of values. Even with these restrictions, one can define twelve color axes (i.e., the sign of the color difference is included in the definition of each unique color axis). One can also define three hue ranges about each color axis. Each color axis may be surrounded by a 60, a 120, or a 180 degree region. The color axis need not be centered on the hue region. To assign values to them, colors in the hue region are projected onto the color axis. FIG. 5 illustrates these color axes and hue regions. The tests for hue boundaries defined by the primary color axes are shown to be comparisons between two colors. The tests for hue boundaries defined by the color difference axes are not shown, but they are simply a test of whether a primary color is positive or negative.
Asymmetrically distributed grass and sky colors are found in a small fraction of all images. In addition, saturated red and blue are found to occur more often in the bottom of images than in the top. This may be due to the association of red and blue with human articles, coupled with the tendency of humans to be gravity bound to the lower portions of images. Unexpectedly, the red color asymmetry occurs about as frequently as the green and blue color asymmetries combined. In experiments on a stored database of exemplary images, many fewer instances of the red asymmetry (out of the thirty-two potential instances in each image) had to be observed in order to signify a useful orientation probability.
The optimum color axis and the optimum hue range to define each color asymmetry is illustrated in FIG. 6 for the exemplary images. For grass, the yellow green axis was slightly preferable to the pure green axis (i.e., pure green is given by G-(R+B)/2).
Equations (3) describe how the three colors are sampled and computed. Each primary color is normalized in certain embodiments of the invention by corresponding values from a scene balance algorithm before being used to define the chroma signal actually employed in the method of the invention. [Equations (3) are not legible in this copy; per the definitions below, they form RX, GX, and BX from the five "X" subsample values, normalize by the scene balance aims R, G, and B, and define the chroma signals RDMG180, YLGR180, and BLU120. A hedged sketch of the projections follows the definitions.]
1. RUL, RUR, RCTR, RLL, RLR, are the five "X" subsample values.
2. RX, GX, and BX are the values assigned to each location in the 21×32 sampling grid.
3. R, G, and B are the estimated aims from the scene balance algorithm module.
4. RDMG180 is the red magenta chroma values defining the red color characteristic.
5. YLGR180 is the yellow green chroma values defining the green color characteristic (associated with grass in the scene).
6. BLU120 is the blue chroma values defining the blue color characteristic (associated with blue sky in the scene).
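As a sketch of the projection idea (Python): RDMG180's R-G axis is taken from its later definition in this text, but the hue-region test and the clipping to zero are assumptions consistent with the remark below that the chroma values have "only positive magnitudes, or 0".

    def chroma(value_on_axis, in_hue_region):
        """Generic chroma signal: the projection onto a color axis,
        counted only when the color lies in the axis's hue region,
        and clipped so values are >= 0."""
        return max(0.0, value_on_axis) if in_hue_region else 0.0

    def rdmg180(r, g):
        """Red-magenta chroma on the R-G axis; the half-plane hue
        test (r > g) is an assumed stand-in for the 180-degree
        region."""
        return chroma(r - g, r > g)

    # r and g here are primaries already normalized by the scene
    # balance aims (RX - R, GX - G), per the definitions above.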
The chroma value used is only one aspect of the characteristic. Other characteristics include the distribution of the chroma values, and their modulation. Equations (4) define the simple red, simple green, and simple blue measures used. As the equations show, all three measures are based upon counting the net number of times the difference of the two maxima found in the six samples at the top and the bottom of the image exceeds a given threshold. (The terms "top" and "bottom" refer to the opposite sides of the image. They are nominal terms only and are based on the scan direction relative to, say, left side load camera systems.)
Equations (4) and the subsequent equations are illustrative of the case when the regions are six samples deep (FIG. 4). When they are more or less than six samples deep, the equations should be modified in the obvious ways. In the form used here, for each of the 32 sampling lines:

RDMGcnt is incremented when RDMGbttm max - RDMGtop max > ThreshRDMG, and decremented when RDMGtop max - RDMGbttm max > ThreshRDMG,   (4)

and YLGRcnt and BLUcnt are accumulated analogously from the corresponding maxima and thresholds, with the sign in each case chosen so that the count is positive when the chroma value exceeds the threshold on the expected side of the image.
1. RDMGtop max, RDMGbttm max are the maximum values of RDMG180 found at the top and the bottom of each sampling line.
2. YLGRtop max, YLGRbttm max are the maximum values of YLGR180 found at the top and the bottom of each sampling line.
3. BLUtop max, BLUbttm max are the maximum values of BLU120 found at the top and the bottom of each sampling line.
4. BLUtop ave, BLUbttm ave are the average values of BLU120 found at the top and the bottom of each sampling line.
5. ThreshRDMG, ThreshYLGR, ThreshBLU are the thresholds by which the corresponding chroma values on one side of the image must exceed the chroma values on the other side of the image in order to define a potential asymmetry.
6. RDMGcnt, YLGRcnt, BLUcnt are color measures which signal the asymmetry if it is found. They represent the net number of times the corresponding chroma value exceeds the threshold on the expected side of the image.
Since the chroma values compared are defined to have only positive magnitudes, or 0, BLUcnt, for instance, can never be incremented or decremented by two YELLOW samples, one of which is simply less yellow than the other.
The results of experiments on different characteristics for a database of images showed that the red characteristic requires a far lower net count than the green characteristic in order to signal a useful asymmetry of distribution. In fact the simple green color must be found across virtually the entire image in order to have a useful asymmetry. However, when it is found, the probability of knowing the correct orientation is very good (around 95%). FIG. 7 illustrates that the probability tends to increase as the number of lines in which asymmetry is found increases. Furthermore, the probabilities generally increase as the threshold is raised. FIG. 8a illustrates this for the simple green color when it is found in all 32 sampling lines. Unfortunately, of course, the number of images found with that degree of difference goes down, as is shown in FIG. 8b.
Table 1 below illustrates a sampling of results typical of those used to decide on the exact definition of the "red" characteristic. It shows the fraction of images and the probabilities for several possible definitions of the red characteristic, with results combined into one coarse and one finer histogram cell for various threshold ranges.
The red characteristics are defined as follows:
1. RDMG180: the red characteristic defined above (i.e., a red characteristic projected onto the R-G axis).
2. RDMG120: the same definition as RDMG180 except restricted to a 120 degree hue range surrounding the R-G axis.
3. RDMGrd120: the same definition as RDMG180 except restricted to a 120 degree hue range centered on the R-(G+B)/2 axis.
4. REDry120: a red characteristic which is projected on the R-(G+B)/2 axis and is restricted to a 120 degree hue range centered on the R-B axis.
None of the definitions of red result in the best performance (highest percent of images with the highest probability) for all combinations of threshold range and net count. However, RDMG180 is generally among the best in all the ranges listed. In the present invention, similar judgements are made for each of the characteristics that are to be used in the determination of orientation.
TABLE 1 - COMPARISON OF PERFORMANCE OF SEVERAL RED CHARACTERISTICS

                 THRESHOLD RANGE     PERCENT      PROBABILITY
CHARACTERISTIC   & NET COUNT         ALL FRAMES   RED ON BOTTOM
RDMG180          400-600, 6 to 32    6.3          .800
RDMG120          400-600, 6 to 32    6.9          .786
RDMGrd120        400-600, 6 to 32    6.3          .787
REDry120         400-600, 6 to 32    n/a          n/a
RDMG180          600-800, 6 to 32    2.9          .858
RDMG120          600-800, 6 to 32    3.2          .836
RDMGrd120        600-800, 6 to 32    3.0          .860
REDry120         600-800, 6 to 32    3.5          .826
RDMG180          >800, 6 to 32       2.2          .932
RDMG120          >800, 6 to 32       2.6          .925
RDMGrd120        >800, 6 to 32       2.2          .932
REDry120         >800, 6 to 32       2.2          .870
RDMG180          600-800, 4 to 5     1.9          .836
RDMG120          600-800, 4 to 5     1.9          .818
RDMGrd120        600-800, 4 to 5     1.9          .813
REDry120         600-800, 4 to 5     2.3          .831
RDMG180          >800, 4 to 5        1.3          .828
RDMG120          >800, 4 to 5        1.5          .794
RDMGrd120        >800, 4 to 5        1.3          .834
REDry120         >800, 4 to 5        1.6          .832
Another asymmetry which may be observed in typical consumer images is that unmodulated areas tend to be found at the top of scenes more often than at the bottom. Equations (5) define a characteristic which attempts to capture this asymmetry. It defines a uniformity characteristic based on the GREEN Log Exposure values within each sampling grid. Since color is not an issue, scene normalization is not needed. [Equations (5) are not legible in this copy; the quantities they define are listed below, and a sketch of the deviation measure follows the definitions.]
1. GkX is the average Green Log Exposure for each of the 5 components of the "X" subsamples from the top or bottom of the sampling line.
2. ΔGkX is the absolute deviation of each element from their average for each of the 5 components of the "X" subsamples from the top or bottom of the sampling line.
3. ΔGi ave is the average deviation at each of the "X" subsamples from the top or bottom of the sampling line.
4. ΔGtop max, ΔGtop min, ΔGbttm max, ΔGbttm min are the maximum and minimum average differences found in the top and bottom of the sampling line.
5. ΔGtop ave, ΔGbttm ave are the average of the average differences found in the top and bottom of the sampling line.
6. XNR is a (standardized) measure of the skew of the distribution of the values making up ΔGtop ave and ΔGbttm ave. XNR ranges between -1 and 1.
7. ΔGtop adjave, ΔGbttm adjave are the average of the average differences after adjustment for excessive skewness which is found in the top and bottom of the sampling line.
8. Glowlim is an image value slightly exceeding the image value resulting from underexposure of a low reflectance object.
9. Thresh1 is the lower threshold and Thresh2 the upper threshold which together define the potential asymmetry in smoothness.
10. ΔGcnt is the smoothness measure which signals the asymmetry if it is found. It represents the net number of times a smooth region was found on the expected side of the image.
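A minimal sketch of the deviation measure (Python), assuming the simple absolute-deviation form suggested by the definitions above; the skew adjustment and the thresholds of equations (5) are omitted here.

    def x_sample_deviation(components):
        """Average absolute deviation of the five components of one
        'X' subsample from their mean; small values indicate low
        modulation (a 'smooth' region)."""
        m = sum(components) / len(components)
        return sum(abs(c - m) for c in components) / len(components)

    # A region contributes to the smoothness count when its
    # deviations fall between Thresh1 and Thresh2, as described above.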
There are an almost unlimited number of ways to define a region with low modulation. In the above equations, an adjustment is made using a parameter called XNR. This is a statistic used for a number of years in known scene balance algorithms. It detects, among other things, low modulance snow scenes which for many years were printed too dark by automated printers. Equations (6) further illustrate the use of XNR to reduce, or eliminate, the effect of one unusually high value from an average of n otherwise similar values. By its original definition, XNR could range in value over an interval whose expression is not legible in this copy. In the present invention, XNR has been standardized to a value of -1 to 1. Its utility arises when the six samples in a line lie in a basically uniform region which, however, touches a modulated region on one end or the other, or which crosses a minor modulation such as a power line. In these instances, the single misleading modulation is discounted by the adjustment. This type of adjustment is also made in defining some of the other measures to be described below. [Equations (6) are not legible in this copy; the quantities they define are listed below, and a sketch of the adjustment follows the definitions.] Where:
1. XNRstd is the standardized measure of skew of the distribution of the values to possibly be adjusted.
2. Vmax, Vmin, Vave are the maximum, minimum, and average of the distribution of values to possibly be adjusted.
3. n is the number of values making up the sample.
4. Vadj is the value of Vave adjusted for excessive positive skew. The adjustment is obviously a linear interpolation between the average including Vmax and the average excluding Vmax.
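A sketch of such an adjustment (Python). The interpolation weight used here, the standardized skew clipped to [0, 1], is an assumption for illustration; the exact weighting in equations (6) is not legible in this copy.

    def adjusted_average(values, xnr_std):
        """Discount one unusually high value by interpolating between
        the average including the maximum and the average excluding
        it. xnr_std: standardized skew in [-1, 1]; the weight below
        is an assumed form (only positive skew triggers adjustment)."""
        v_ave = sum(values) / len(values)
        v_excl = (sum(values) - max(values)) / (len(values) - 1)
        w = max(0.0, xnr_std)
        return (1.0 - w) * v_ave + w * v_excl

    # A near-uniform line touched by one modulated sample:
    print(adjusted_average([5.0, 5.1, 4.9, 5.0, 5.1, 9.0], xnr_std=0.9))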
Returning to the simple uniformity characteristic, its definition of smoothness may be confused by a uniform gradient, such as is produced by a point source illuminating an extended uniform surface, or a heavily vignetting lens. The final characteristic to be described was designed to recognize a uniform linear gradient. FIG. 9 illustrates this concept. There are six "X" samples in both the top and bottom of each sampling line. A Chi-Squared-like test is run to test the hypothesis that the values of the subsamples lie on a linear gradient. The eighteen expected levels are defined by a linear interpolation (extrapolation) between two end points defined by the centers of the first and last "X". The expected values assigned to these two centers are simply the average of the five values making up the "X" sample at that location. Equations (7) define the smooth gradient measure. [Equations (7) are not legible in this copy; the quantities they define are listed below, and a sketch of the test follows the definitions.]
1. Gstrt0 aim, Gstrt15 aim are the average of the five subsamples at the start of the top and bottom of each line.
2. Gtop grad, Gbttm grad are the aim gradients between the start and end of the top and bottom sections of each line.
3. Gtop X², Gbttm X² are the measures of deviation of the actual data from the expected linear gradient.
4. ThreshΔ1, ThreshΔ2 are the thresholds on the deviations from the linear gradient which define the asymmetry of the distribution of regions of smooth uniform gradients.
5. Glowlim is an image value slightly exceeding the image value resulting from underexposure.
6. GX²cnt is the smooth uniform gradient measure which signals the asymmetry if it is found. It represents the net number of times a relatively smooth uniform gradient is found on the expected side of the image.
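A sketch of a gradient test of this shape (Python). The normalization and thresholds here are assumptions; only the construction, aim levels interpolated between the two end-point "X" averages followed by a chi-squared-like deviation sum, follows the description above.

    def gradient_deviation(x_sample_means, all_values):
        """Chi-squared-like deviation of samples from a linear
        gradient. x_sample_means: the six per-'X' averages along one
        side of a line. all_values: the raw subsample values, in
        order. Aim levels are interpolated between the first and last
        averages; the span normalization is an assumed form."""
        start, end = x_sample_means[0], x_sample_means[-1]
        n = len(all_values)
        aims = [start + (end - start) * i / (n - 1) for i in range(n)]
        span = abs(end - start) or 1.0
        return sum((v - a) ** 2 for v, a in zip(all_values, aims)) / span

    # A small deviation is consistent with a smooth uniform gradient.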
Table 2 lists the measures used in an exemplary embodiment of the present invention. This table shows the thresholds, the counts signaling the asymmetry in the distribution, the region depth in samples, the probability that this measure predicts the correct orientation in landscape images, the probability of finding this measure in a landscape image, and the ratio of the probabilities of finding this measure in a portrait image to finding it in a landscape image. The characteristics Gsmth and Ggrad are segregated into two categories depending on the value of BLU(120)ave from equations (4). The threshold categories combined with the count limits generate unique categories within a measure type. No measure is used that has a landscape probability below 0.80 of predicting the correct orientation.
TABLE 2 - ORIENTATION ALGORITHM THRESHOLDS AND PROBABILITIES

                                EXCESS  REGION  PROB     % OF LANDSCAPE  PORTRAIT/
MEASURE           THRESHOLD     LINES   DEPTH   CORRECT  FRAMES          LANDSCAPE RATIO   #
RDMG(180)         400-800       5-7     4       0.849    4.4%            1.02              1
RDMG(180)         400-600       8-10    6       0.864    1.7             1.20              2
RDMG(180)         400-600       11-32   7       0.881    2.7             0.48              3
RDMG(180)         600-800       11-32   7       0.915    1.2             0.56              4
RDMG(180)         >600          8-10    6       0.941    1.7             0.51              5
RDMG(180)         >800          5-7     4       0.983    1.5             0.38              6
RDMG(180)         >800          11-32   7       0.999    0.5             0.88              7
YLGR(180)         100-275       32      5       0.884    3.7             0.06              1
YLGR(180)         >275          27-31   4       0.912    0.9             0.13              2
YLGR(180)         >275          32      5       0.963    0.7             0.001             3
BLU(120)          >675          5-32    6       0.999    0.6             0.20              1
Gsmth (BLU120>60) 5.5-10.5      4-6     5       0.960    2.5             1.20              1
Gsmth (BLU120>60) 7.0-8.0       17-24   4       0.961    2.6             0.08              2
Gsmth (BLU120>60) 5.5-10.5      7-10    4       0.964    2.1             1.10              3
Gsmth (BLU120>60) 7.0-8.0       11-16   4       0.982    2.8             0.43              4
Gsmth (BLU120>60) 7.0-8.0       25-32   4       0.999    2.6             0.001             5
Gsmth (BLU120<60) 5.5-10.5      4-6     5       0.918    6.5             0.82              6
Gsmth (BLU120<60) 5.5-10.5      7-10    6       0.959    3.7             0.67              7
Gsmth (BLU120<60) 7.0-8.0       25-32   4       0.959    1.8             0.12              8
Gsmth (BLU120<60) 5.5-10.5      17-24   4       0.960    2.5             0.18              9
Gsmth (BLU120<60) 5.5-10.5      11-16   4       0.962    3.3             0.54              10
Ggrad (BLU120>60) 0.08-0.68     7-22    4-5     0.878    1.2             0.54              1
Ggrad (BLU120>60) 0.08-0.68     4-6     4       0.899    3.3             0.98              2
Ggrad (BLU120>60) 0.08-0.68     7-22    5       0.965    4.3             0.33              3
Ggrad (BLU120>60) 0.08-0.68     23-32   4       0.999    1.4             0.001             4
Ggrad (BLU120<60) 0.08-0.68     7-22    4-5     0.861    2.7             0.97              5
Ggrad (BLU120<60) 0.08-0.68     4-6     4       0.875    6.2             0.78              6
Ggrad (BLU120<60) 0.08-0.68     7-22    5       0.889    6.1             0.42              7
Ggrad (BLU120<60) 0.08-0.68     23-32   4       0.968    0.8             0.14              8
The below described test by the inventors made use of two databases. The first database (Database 1) was generated from an actual prescan (128×192) of a set of negatives and contains 2183 images. The entire data set is in three files (a red pixel file, a green file, and a blue file). The second database (Database 2) was generated by averaging down, from 1308×1932 to 128×192, the data from 2697 individual images scanned by a 35 mm area array scanner. It was assembled by sampling (usually) nine images from about 290 different customer orders. This data was converted to the same Scene Log Exposure metric as was used for Database 1, and was assembled into several red, green, and blue pixel data files.
In the test of the present invention on the sample databases of images performed by the inventors, it was determined whether the characteristics used in the inventive method are independently useful. It was found that for the most part, the characteristics are independently useful.
Table 3 is an attempt to address that question. It has separate columns for Database 1 and Database 2, representing different sets of images.
There is a similarity of results for the two databases. Each array of numbers shows the results when the measure at the top left of the array was the one used to predict the orientation for those particular images. Each row shows the results either for that same measure (marked by **) or for the other measures which happened to also be found in the same images but which actually represented a less reliable indication of the orientation in those particular images. The first column (column #0) is the total number of images in which that measure was found, and the following columns are the numbers of images for each range of the measure defined in Table 2. The last column in Table 2 gives the number of the column in Table 3 associated with that range. For instance, in the array for Database 1 labeled RDMG180 there were 210 images for which RDMG180 was the characteristic used to predict the orientation for that image. Furthermore, 75 times the first measure defined in Table 2 (with a probability of 0.849 of giving the correct orientation) was the measure used, 26 times the second measure from Table 2 was used, etc. Thus 210 = 75 + 26 + 35 + 15 + 22 + 27 + 10. Other less reliable measures were also found in some of those 210 images. For instance, 15 times the Gsmth measure was also found. However, in every one of those 15 cases, the probabilities associated with those measures were lower than the probability of the RDMG180 measure used.
The results shown in Table 3 may be used to illustrate two considerations. The first is the practical effectiveness of each of the measures used in the orientation method of the invention. The table shows that for most of the measures there were some images for which that measure was the only indication of orientation. For instance consider Gsmth. Even after subtracting all the other instances where a measure with a lower probability could have been used, there are still at least 128 images in Database 1 (128=620-51-44-0-397) and at least 13 images in Database 2 (13=371-41-26-0-291) for which there was no other measure found.
TABLE 3

(First column: total number of images; columns #1-#10: counts in the correspondingly numbered ranges of Table 2. ** marks the measure used to predict orientation.)

DATABASE 1

Predictor RDMG(180):
  **RDMG   210:  75  26  35  15  22  27  10   0   0   0
  YLGR       0:   0   0   0   0   0   0   0   0   0   0
  BLU        0:   0   0   0   0   0   0   0   0   0   0
  Gsmth     15:   1   0   0   1   0   7   3   0   1   2
  Ggrad      9:   0   1   0   0   0   7   1   0   0   0

Predictor YLGR(180):
  RDMG       0:   0   0   0   0   0   0   0   0   0   0
  **YLGR    66:  48   9   9   0   0   0   0   0   0   0
  BLU        0:   0   0   0   0   0   0   0   0   0   0
  Gsmth      9:   3   0   4   2   0   0   0   0   0   0
  Ggrad      8:   0   5   3   0   0   0   0   0   0   0

Predictor BLU(120):
  RDMG       0:   0   0   0   0   0   0   0   0   0   0
  YLGR       0:   0   0   0   0   0   0   0   0   0   0
  **BLU      8:   8   0   0   0   0   0   0   0   0   0
  Gsmth      2:   0   0   0   0   0   1   0   0   0   1
  Ggrad      1:   0   0   0   0   0   1   0   0   0   0

Predictor Gsmth:
  RDMG      51:  30   9   6   0   1   4   1   0   0   0
  YLGR      44:  32   3   9   0   0   0   0   0   0   0
  BLU        0:   0   0   0   0   0   0   0   0   0   0
  **Gsmth  620:  23  44  20  37  64 154  79  23  75 101
  Ggrad    397:  18  34  53  16  62  75 139   0   0   0

Predictor Ggrad:
  RDMG       9:   4   3   2   0   0   0   0   0   0   0
  YLGR       1:   1   0   0   0   0   0   0   0   0   0
  BLU        0:   0   0   0   0   0   0   0   0   0   0
  Gsmth     42:   1   2   5   1   0  10   2  18   1   2
  **Ggrad  155:   5  14   9   3  21  47  35  21   0   0

DATABASE 2

Predictor RDMG(180):
  **RDMG   232:  81  48  26  18  24  25  10   0   0   0
  YLGR       0:   0   0   0   0   0   0   0   0   0   0
  BLU        0:   0   0   0   0   0   0   0   0   0   0
  Gsmth      7:   0   0   0   0   0   6   0   0   0   1
  Ggrad      8:   0   0   0   0   5   2   1   0   0   0

Predictor YLGR(180):
  RDMG       9:   4   2   3   0   0   0   0   0   0   0
  **YLGR    40:  20  13   7   0   0   0   0   0   0   0
  BLU        0:   0   0   0   0   0   0   0   0   0   0
  Gsmth      3:   0   2   0   1   0   0   0   0   0   0
  Ggrad      5:   1   1   3   0   0   0   0   0   0   0

Predictor BLU(120):
  RDMG       5:   2   0   0   0   1   1   1   0   0   0
  YLGR       0:   0   0   0   0   0   0   0   0   0   0
  **BLU     15:  15   0   0   0   0   0   0   0   0   0
  Gsmth      1:   0   0   0   0   0   1   0   0   0   0
  Ggrad      3:   0   1   0   0   1   1   0   0   0   0

Predictor Gsmth:
  RDMG      41:  14  13   6   4   3   0   1   0   0   0
  YLGR      26:  17   7   2   0   0   0   0   0   0   0
  BLU        0:   0   0   0   0   0   0   0   0   0   0
  **Gsmth  371:  27  41  14  56  35  97  38   1  24  38
  Ggrad    291:  39  25  65  10  30  59  63   0   0   0

Predictor Ggrad:
  RDMG      14:   7   4   0   1   1   0   1   0   0   0
  YLGR       4:   4   0   0   0   0   0   0   0   0   0
  BLU        0:   0   0   0   0   0   0   0   0   0   0
  Gsmth     23:   0   3   5   1   0   5   0   6   2   1
  **Ggrad  139:   4  25   8   4  22  48  20   8   0   0
The second consideration is whether the occurrence of the characteristics in each image are independent events or not. For some characteristics such as Gsmth and Ggrad, it is obvious that they are not independent. Although definition of measures which are independently distributed across all images would be useful, this typically is not possible. What is required is that each measure contributes enough unique information to the orientation detection process to justify the expense of its calculation.
Although an exemplary embodiment of the present invention has been described with the use of specific scene characteristics helpful in determining orientation, other equally valid approaches to detecting the orientation of individual scenes, and thus the entire order, may be taken within the scope of the invention. For instance, if human faces can be recognized, then their orientation in the image would be a very reliable indicator of the orientation of the scene. Alternatively, some vertical vs horizontal asymmetry in the frequency content of edges could be detected.
The present invention provides a method and device that determines the orientation of entire customer orders by looking for scene characteristics that are distributed asymmetrically top to bottom in landscape scenes. The asymmetry in the distribution translates into probability estimates for the orientation of each scene in which it is found. The probability of orientation for each order is calculated from the probability of orientation for each frame using the Bayesian probability propagation model. The hope is to detect asymmetrically distributed characteristics in enough scenes in each order so that the Bayesian probability propagation model predicts the orientation for that order with a high degree of reliability.
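To illustrate the kind of combination step involved, here is a minimal sketch of Bayesian probability propagation under an assumed independence of the per-frame evidence and a uniform prior; the probabilities and variable names are hypothetical, not taken from the patent:

    % Per-frame probabilities that each scene is "right side up",
    % derived from the asymmetrically distributed characteristics.
    p = [0.849 0.76 0.91 0.55];      % hypothetical values

    % Multiply the likelihoods for the two orientation hypotheses
    % and renormalise to get the order-level confidence.
    up   = prod(p);                  % evidence for one orientation
    down = prod(1 - p);              % evidence for the other
    orderProbability = up / (up + down)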
The present invention uses image processing components that are known to those of ordinary skill in the art, and which are readily programmable by those of ordinary skill to perform the method of the present invention.
Although the invention has been described and illustrated in detail, it is to be clearly understood that the same is by way of illustration and example, and is not to be taken by way of limitation. The spirit and scope of the present invention are to be limited only by the terms of the appended claims.
MATLAB is a high-performance language that integrates computation, visualisation and programming in one environment. It also has high-end data structures, built-in editing and debugging tools, and supports object-oriented programming. All in all, you may need some time to get a solid grip on this programming language owing to the complications involved. Don’t worry; our MATLAB assignment writing help service is right here at your disposal. We have handpicked the most experienced MATLAB assignment experts to help you complete your MATLAB assignment easily. Get in touch with us in case you need immediate assignment help on MATLAB.
MyEssayAssignmentHelp.com Assignment Help Service Main Advantages
✍️ Professional Writers
500+ top-notch authors
✅ Plagiarism-Free Policy
Only original work
⏰ On-Time Delivery
Strict deadlines compliance
☝️ Safe Payments
Secure SSL encryption
❎ No Hidden Charges
Without extra fees
Opt For Our MATLAB Assignment Writing Help Service
We cover all the basics of MATLAB
Our MATLAB assignment writing help service covers all the basic concepts of this language. You can take your time to get familiar with how this language works. Meanwhile, get MATLAB assignment help from us to get your projects done. Check out the basics covered by our MATLAB assignment writing help service.
The Major Tools
Many students opt for our MATLAB assignment writing help service because they aren’t familiar with the tools available on MATLAB. The following tools are the major ones.
The COMMAND WINDOW
The CURRENT DIRECTORY
The COMMAND HISTORY
The HELP BROWSER
These tools are required to perform very simple calculations. Avail our MATLAB assignment writing help service if you don’t have the time to get the hang of how these tools operate.
Basic Arithmetic Operators
You may be asked to use MATLAB as a calculator to perform a simple interactive calculation. Our MATLAB assignment writing help service is here if you don’t have the time to get used to the basic arithmetic operators, such as addition (+), subtraction (-), multiplication (*), division (/ and \) and exponentiation (^).
Our MATLAB assignment writing service is well-versed with all the arithmetic operators. Get MATLAB assignment help from us today if you find it hard to use these operators.
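For instance, entered at the command prompt, the operators behave as follows (a minimal illustration; the numbers are arbitrary):

    3 + 5*2      % multiplication binds tighter than addition: ans = 13
    (3 + 5)*2    % parentheses change the order: ans = 16
    2^10 / 4     % exponentiation, then division: ans = 256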
Our MATLAB assignment writing service is skilled enough to execute all the additional operations. If you are unable to perform any of the following additional operations on MATLAB, just ask us to do my assignment on MATLAB.
Creating MATLAB variables
Controlling the hierarchy of operations
Controlling the appearance of a floating-point number
This is just a glimpse of the additional operations our MATLAB assignment help covers; a short sketch appears below. No matter what your question is, our MATLAB assignment writing service can manage it.
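Here is what those three operations look like in practice (an illustrative sketch; the variable names and values are arbitrary):

    x = 4.7;              % creating a MATLAB variable
    y = (x + 2)^2 / 3;    % parentheses control the hierarchy of operations
    format long           % show full floating-point precision
    y
    format short          % back to the default 5-digit display
    y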
Your MATLAB projects may be based on any of these basic concepts. Don’t worry if you don’t have the time to go through the basics. Avail our MATLAB assignment writing service. We provide the best MATLAB assignment help to students all around Australia.
Do You Need Immediate Help With MATLAB Assignment?
Contact us at your convenience
We understand that solving MATLAB problems is no easy feat for students with a busy schedule. Therefore, you can ask for help with MATLAB assignments at MyEssayAssignmenthelp.com. Besides the basic concepts, we provide help with MATLAB assignments for other aspects as well. Check them out.
At times, Australian professors ask their students to plot the results of computation or a given data set on MATLAB. Our MATLAB assignment helpers are familiar with the ins and outs of solving mathematical equations with graphics. We offer help with MATLAB assignments for the following plotting tasks.
Adding titles, axis labels, and annotations
Multiple data sets in one plot
Specifying line styles and colors
Do you need help with MATLAB assignments for basic plotting tasks? Get MATLAB assignment help from us.
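As an illustration, the three plotting tasks above can be combined in a few lines (an indicative sketch, not a model solution):

    x = linspace(0, 2*pi, 200);
    plot(x, sin(x), 'b-', x, cos(x), 'r--')  % two data sets in one plot,
                                             % with line styles and colors
    title('Sine and cosine')                 % adding a title
    xlabel('x (radians)')                    % axis labels
    ylabel('value')
    legend('sin(x)', 'cos(x)')               % annotate which curve is which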
Matrices are the fundamental aspects of MATLAB. Ask for help with my assignment on MATLAB if you are not familiar with matrix manipulation and generation. Check out the tasks we usually work on.
Entering a vector
Entering a matrix
Transposing a matrix
Don’t panic if you are unable to get accurate MATLAB assignment solutions. Get help with MATLAB assignments from our experts.
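Each of these tasks takes one line in MATLAB (a quick sketch with arbitrary entries):

    v = [1 2 3]           % entering a row vector
    w = [1; 2; 3]         % semicolons start a new row: a column vector
    A = [1 2 3; 4 5 6]    % entering a 2-by-3 matrix
    B = A'                % transposing a matrix (' also conjugates
                          % complex entries; use .' for a plain transpose)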
MATLAB consists of two types of arithmetic operations. We provide help with MATLAB assignment for both. Check them out.
Matrix arithmetic operations
Array arithmetic operations
Do you want to learn the technique of solving these operations? Talk to our MATLAB online tutors. We will assist you in the best way possible.
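The distinction is a one-character affair: a dot in front of the operator makes it element-by-element (illustrative values):

    A = [1 2; 3 4];
    B = [5 6; 7 8];
    A*B     % matrix arithmetic: the usual rows-times-columns product
    A.*B    % array arithmetic: element-by-element product
    A./B    % element-by-element division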
Tell us your MATLAB assignment topic and we will assign a suitable writer to the task. Get MATLAB assignment help from us and score excellent marks in this project.
Share Your Assignment Requirements With Our Chat Executive
The programming aspects of MATLAB are quite confusing. Many students get MATLAB assignment help from us to get their programming-related topics done with efficiency. Our MATLAB assignment experts are flexible with all the control flow structures required to write a program on MATLAB. Have a look.
The “If…End” Structure
MATLAB supports three variants of the “if” structure. You may be asked to solve programming questions based on these forms of the ‘if’ statement. Our MATLAB assignment experts follow the right form while executing this structure in programs.
Relational And Logical Operators
Relational operators compare two values and return true or false; logical operators combine such comparisons. Each relational and logical operator has a specific description. Connect with our MATLAB assignment experts if you are not familiar with the operators.
The “For…End” Loop
This structure is used when a command needs to be repeated at a fixed and predetermined number of times. Let our MATLAB assignment experts help you out if you are unable to execute this control flow structure.
The “While…End” Loop
We use this loop when the number of passes has not been specified. Hire our MATLAB assignment writers to avoid any errors in these loop structures. We proofread and edit your assignment thoroughly before sending it to you.
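Put together, the three control flow structures look like this (a minimal sketch with arbitrary values):

    n = 7;
    if mod(n, 2) == 0        % an if...end using a relational operator
        disp('even')
    else
        disp('odd')
    end

    total = 0;
    for k = 1:5              % for...end: a fixed, predetermined number of passes
        total = total + k^2;
    end

    while total > 10         % while...end: the number of passes is not fixed
        total = total - 10;
    end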
Opt for our MATLAB assignment help to get your programming-related topics solved easily. Our team is experienced and skilled enough to deduce accurate solutions in your MATLAB assignments. Bid adieu to MATLAB blues with our MATLAB assignment help.
Avail MATLAB Homework Help From Us
And enjoy our client-centric features
An assignment can take many forms. Homework is one of them. Thus, we offer unmatched quality MATLAB homework help to our clients. Get MATLAB homework help from us to enjoy the following benefits.
Our MATLAB homework help is available at reasonable charges. We have compared the prices offered by our competitors to bring the most affordable price to you. Get MATLAB homework help from us even if you have a tight budget.
Our team works on your MATLAB assignments systematically. We deliver the task right on time. Feel free to opt for our MATLAB homework help even if you have an urgent deadline.
Sign-Up Bonuses & Other Offers
You can earn $20 when you sign up with us. We also offer referral rewards and loyalty bonuses, along with our MATLAB assignment help. Give us a call to know more about our services.
Round The Clock Support
Our MATLAB assignment help online is available 24*7 for you. You can get in touch with us whenever you need our assistance. We don’t keep our clients waiting. Therefore, we respond instantly.
Our team is familiar with all Australian universities. We solve MATLAB assignments based on your university guidelines. Order your assignment with us to get 100% accurate MATLAB answers within your deadline.
Question: What is MATLAB?
Answer: MATLAB is a multi-paradigm programming language that is used to generate simulations. It helps in generating arrays and is useful for mathematics and computation. You can perform algorithm development, modeling, prototyping, data analysis, engineering graphics, and much more. Usually, you have to type very precise code (including Boolean functions) to execute the programs.
Question: Is there any Matlab assignment writing service provider?
Answer: Yes, academic service providers like MyAssignmenthelp.com, Essayassignmenthelp.com, and Tophomeworkhelper.com offer MATLAB assignment assistance. The experts are highly qualified, and they are familiar with the functions, data types, and intricate programming syntax of MATLAB. Hence, they can generate simulations in the blink of an eye and offer timely assistance. Moreover, they can execute imaging and engineering graphics, and carry out data analysis.
Question: How do I create a Matlab application?
Answer: You can create MATLAB applications in various ways. You can use the Application Compiler app, which produces an installer for a standalone application. Or you can use the compiler.build.standaloneApplication function; this does not include a MATLAB Runtime installer. Finally, you can try the mcc command to produce a standalone executable. If you want to package the files and create an installer, you can use compiler.package.installer.
Question: What are the applications of Matlab?
Answer: MATLAB is a versatile programming language, meant for the engineering and mathematics niche. It is used for application development, like Graphical User Interface building. It is also relied upon by engineers for imaging, math and computation, algorithm development, modeling, simulation, data analysis, visualization, and scientific and engineering graphics. Usually, the coding is not very intricate compared to C or Python, and is very precise.
Question: How can I get an online expert's help for my Matlab assignment?
Answer: If you want help with your MATLAB assignment, you should seek our expert assistance. The experts can model real-world systems, conduct data analysis, design algorithms, etc. Thus, if you struggle with your assignment or projects, do not hesitate to place an order. All you have to do is specify the requirements and make the payment. Once you do this, our experts get to work.
Question: What are the advantages of using Matlab?
Answer: MATLAB is very straightforward and calls for precise coding for the generation of simulations. It finds application in the engineering and mathematical domain and is useful for algorithm development, data analysis, simulation, imaging, etc. It is an interactive system and the basic data element is an array that does not require dimensioning. Thus, you can solve many technical computing problems, involving matrix and vector formulations, instantly.
Factoring. Factoring a polynomial allows us to rewrite it in a more manageable form.

Remarks. For polynomials in one variable, finding the factors is equivalent to finding the roots: r is a root of a polynomial p(x) if and only if (x − r) is a factor of p(x). A polynomial of degree n has at most n roots, and so at most n linear factors.

In mathematics, factorization or factoring is the breaking apart of a polynomial into a product of other smaller polynomials. If you choose, you could then multiply these factors together, and you should get the original polynomial (this is a great way to check yourself on your factoring skills).

A polynomial equation of degree two is called a quadratic equation. Listed below are some examples of quadratic equations: x² + 5x + 6 = 0, 3y² + 4y = 10, 64u² − 81 = 0, n(n + 1) = 42. The last equation doesn't appear to have the variable squared, but when we simplify the expression on the left we will get n² + n = 42.
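For a quick check of such examples, MATLAB's Symbolic Math Toolbox (assumed available here) can factor and solve them directly:

    syms x n
    factor(x^2 + 5*x + 6)         % factors as (x + 2)*(x + 3)
    solve(3*x^2 + 4*x == 10, x)   % roots of 3x^2 + 4x - 10 = 0
    solve(n*(n + 1) == 42, n)     % n = 6 or n = -7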
In the case of the polynomial division above, the zero remainder tells us that x + 1 is a factor of x² − 9x − 10, which you can confirm by factoring the original quadratic dividend: x² − 9x − 10 = (x + 1)(x − 10).

Basically, if there's a factor like (k − 5/3), then since k = 5/3 would make that factor zero, it had better make the whole polynomial zero as well. So if you know another way to find zeros/roots of the polynomial, you can use that to solve the problem. As Battani implied, the quadratic formula is one method we can use.

Any real polynomial can be expressed as a product of quadratic and binomial factors like $(x+a)$ and $(x^2 + bx + c)$. Given a polynomial, is there an algorithm which will find such factors? For example, how can I express $x^4 + 1$ in the form $(x^2 + bx + c)(x^2 + dx + f)$?
Example question. What is a possible value for x in x² − 12x + 36 = 0? Correct answer: 6. Explanation: you need to factor to find the possible values for x, filling in the blanks with two numbers with a sum of −12 and a product of 36, namely −6 and −6; so x² − 12x + 36 = (x − 6)² and x = 6.

Factoring Polynomials Using the Greatest Common Factor (GCF). There are several methods that can be used when factoring polynomials. The method that you choose depends on the make-up of the polynomial that you are factoring. In this lesson we will study polynomials that can be factored using the Greatest Common Factor.

Factoring polynomials is the reverse procedure of multiplying factors of polynomials. An expression of the form axⁿ + bxⁿ⁻¹ + cxⁿ⁻² + … + kx + l, where each variable has a constant accompanying it as its coefficient, is called a polynomial of degree n in the variable x.

We show that if $f(x)$ is a polynomial in $Z[\alpha][x]$, where $\alpha$ satisfies a monic irreducible polynomial over Z, then $f(x)$ can be factored over $Q(\alpha)[x]$ in polynomial time. We also show that the splitting field of $f(x)$ can be determined in polynomial time. It is entirely possible that, when looking for a given polynomial's roots, we might obtain a messy higher-order polynomial for S(x) which is further factorable over the rationals even before considering irrational or complex factorings.
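One concrete way to answer the x⁴ + 1 question numerically is to pair the complex-conjugate roots into real quadratics; this sketch uses only core MATLAB functions (roots, poly, conv):

    r  = roots([1 0 0 0 1]);          % the four complex roots of x^4 + 1
    r1 = r(find(imag(r) > 0, 1));     % pick one root from the upper half plane
    q1 = real(poly([r1  conj(r1)]))   % x^2 - sqrt(2)*x + 1, up to rounding
    q2 = real(poly(-[r1  conj(r1)]))  % the other conjugate pair: x^2 + sqrt(2)*x + 1
    conv(q1, q2)                      % multiply back: approximately [1 0 0 0 1]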
The decision problem of ∀∃ sentences over general integral domains is polynomial-time reducible to factoring integers over Z and factoring polynomials over finite fields: two of the best known problems in computer algebra. This work provides quite complete answers to nearly all the known decidable cases over integral domains; for some cases, their decidability was previously unknown.

I call it factoring in pairs, but your book may refer to it as factoring by grouping. By whatever name, this technique is sometimes useful, but mostly it is helpful as a means of introducing how to factor quadratics, which are degree-two polynomials. Or, at least, most textbook authors seem to feel that this is a helpful step along the way.

Factoring, polynomial division and the quadratic formula tell us about the nature of the roots of a polynomial: we use these skills to find the zeros/roots of polynomials. In future lessons you will learn other rules and theorems to predict the values of roots so you can solve higher-degree polynomials!

Factoring is a process of changing an expression from a sum or difference of terms to a product of factors. Note that in this definition it is implied that the value of the expression is not changed - only its form.

REMOVING COMMON FACTORS. OBJECTIVES. Upon completing this section you should be able to: determine which factors are common to all terms in an expression; factor common factors.

Section 1-5: Factoring Polynomials. For problems 1 - 4, factor out the greatest common factor from each polynomial: 6x⁷ + 3x⁴ − 9x³; a³b⁸ − 7a¹⁰b⁴ + 2a⁵b²; 2x(x² + 1)³ − 16(x² + 1)⁵.
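The GCF problems above can be checked symbolically (again assuming the Symbolic Math Toolbox; factor returns the irreducible factors, from which the common monomial can be read off):

    syms x a b
    factor(6*x^7 + 3*x^4 - 9*x^3)             % pulls out the GCF 3*x^3 first
    factor(a^3*b^8 - 7*a^10*b^4 + 2*a^5*b^2)  % common factor a^3*b^2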
In the following polynomial, identify the terms along with the coefficient and exponent of each term. The terms are just the things being added up, so here the first term is 3x² and the second term is −8x. You might say, "hey, wait, isn't it minus 8x?" - and you can read it that way: subtracting 8x is the same as adding negative 8x.

Factoring Polynomials - the quickest route to learning a subject is through a solid grounding in the basics. So what you won't find in this book is a lot of endless drills. Instead, you get a clear explanation that breaks down complex concepts into easy-to-understand steps, followed by highly focused exercises that are linked to core skills - enabling learners to grasp when and how to apply those skills.

Any natural number that is greater than 1 can be factored into a product of prime numbers; for example 20 = 2 · 2 · 5 and 30 = 2 · 3 · 5. (A whole number greater than 1 that has more than two positive factors is composite.) Factoring polynomials proceeds the same way, starting from the greatest common factor of the terms.
Solving polynomials and factoring is an essential skill that is needed in any high school or college level math course and is even used in some science classes. Understanding factoring can help any student understand the importance of polynomials and their application in the real world. More importantly, students will also get a sense of the many applications of mathematics in everyday life. Typical practice exercises cover factoring trinomials with leading coefficient 1, factoring polynomials using the GCF and quadratic expressions, and solving quadratic equations by factoring.

From trivia in factoring polynomials to graphing linear inequalities, we have got every aspect discussed. Come to Algebra-net.com and study subtracting polynomials, basic mathematics and a good number of other math topics.
Therefore, when solving quadratic equations by factoring, we must always have the equation in the form (quadratic expression) equals (zero) before we make any attempt to solve the quadratic equation by factoring. Returning to the exercise: the Zero Factor Principle tells me that at least one of the factors must be equal to zero, so I can set each factor equal to zero and solve the resulting linear equations.

When a polynomial is set equal to a value (whether an integer or another polynomial), the result is an equation. An equation that can be written in the form ax² + bx + c = 0 is called a quadratic equation. You can solve a quadratic equation using the rules of algebra, applying factoring techniques where necessary, and by using the Principle of Zero Products.
I've wondered about the real purpose of factoring for a long, long time. In algebra class, equations are conveniently set to zero, and we're not sure why; here's what happens in the real world.

Factoring. We have been multiplying polynomials by using the Distributive Property, where all the terms in one polynomial must be multiplied by all terms in the other polynomial. Now, you will start learning how to reverse this process using a different method called factoring. Factoring is a technique that takes the factors that are common to all the terms in a polynomial out of the expression.
Common factoring patterns include: difference of squares, perfect square trinomials (sum and difference), difference of cubes, and sum of cubes; practice problems exist for each type of pattern.

Divide both sides by 2: x = −1/2, and that is the solution (you can also see this on the graph). We can also solve quadratic polynomials using basic algebra, or by experience or simple guesswork; it is always a good idea to see if we can do simple factoring first.
This video introduces students to polynomials and terms. Part of the Algebra Basics Series: https://www.youtube.com/watch?v=NybHckSEQBI&list=PLUPEBWbAHUszT_Geb

As factoring is multiplication backwards, we will start with a multiplication problem and look at how we can reverse the process; model problems are explained step by step.
Solving Polynomial Equations by Factoring. To solve polynomial equations of second or higher degree by factoring, we • arrange the polynomial in decreasing order of powers on one side of the equation, • keep the other side of the equation equal to 0, • factor the polynomial completely, • use the zero-product property to form linear equations for each factor, • solve the linear equations.

Factoring Quadratic Equations - Methods & Examples. Do you have any idea about the factorization of polynomials? Since you now have some basic information about polynomials, we will learn how to solve quadratic polynomials by factorization. First of all, let's take a quick review of the quadratic equation: a quadratic equation is a polynomial of second degree, usually in the form f(x) = ax² + bx + c.
A polynomial is made up of terms and each term has a coefficient, while an expression is a sentence with a minimum of two numbers and at least one math operation in it. The expressions which satisfy the criterion of a polynomial are polynomial expressions. For example, x² + 3√x is not a polynomial expression, because √x carries a fractional exponent.

Factoring Polynomials Games. These games are designed to make factoring polynomials fun, active, and sometimes just a little competitive.

An equation is a mathematical sentence built from expressions using one or more equal signs (=).
Unit Plan - Backward Design (UbD). Factoring Polynomials - Chapter 9 - Algebra I. Unit theme: factoring polynomial expressions; subject area: Algebra I; content area extension: art - Piet Mondrian's color-block paintings.

The box method makes factoring quadratic polynomials a bit easier. It is a more visual way to factor a quadratic polynomial, which is a polynomial where the highest exponent is 2.

[Overhead slides in the unit cover multiplying polynomials, factoring, completing the square, and dividing polynomials, motivated by base-ten area models (13 × 15, 21 × 14, 18 × 12, 146 × 57) and "diamond problems" for finding number pairs with a given sum and product.]

Overview: Factoring Polynomials. In order to factor polynomials, it is important to find the greatest common factors and use the distributive property. Use the integer coefficients to rewrite the polynomial and find the factors. The difference between finding prime factors of a real number and prime polynomials is that there are variables involved; multiplying the factors back together is a good check.
4.8 Applications of Polynomials. The last thing we want to do with polynomials is, of course, apply them to real situations. There are a variety of different applications of polynomials that we can look at. A number of them will not get treated until later in the text, when we have more tools for solving than we do now. In the meantime, we still have plenty of applications to keep us busy.

Factoring - Grouping. Objective: factor polynomials with four terms using grouping. The first thing we will always do when factoring is try to factor out a GCF. This GCF is often a monomial: in the problem 5xy + 10xz the GCF is the monomial 5x, so we would have 5x(y + 2z). However, a GCF does not have to be a monomial; it could be a binomial.

Section 4.2 Factoring the Greatest Common Factor. Factoring a polynomial is the reverse action to expanding the product of two or more polynomials. For example, using the FOIL expansion we know that (x − 3)(x + 7) = x² + 4x − 21. So if we were asked to factor x² + 4x − 21, we could reverse the process to recover (x − 3)(x + 7).
The mean value of these temperature measurements is then: (23.1°C + 22.5°C + 21.9°C + 22.8°C + 22.5°C) / 5 = 22.56°C.

Variance and Standard Deviation. Now we want to know how uncertain our answer is. The basic idea of the max-min method is to use the uncertainty ranges of each variable to calculate the maximum and minimum values of the function. If a wider confidence interval is desired, the uncertainty can be multiplied by a coverage factor (usually k = 2 or 3) to provide an uncertainty range that is believed to contain the true value with a higher level of confidence. Random errors are unavoidable.
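These quantities take a few lines in MATLAB (a minimal sketch using the temperature data above):

    T   = [23.1 22.5 21.9 22.8 22.5];  % repeated readings, deg C
    m   = mean(T)                      % mean: 22.56
    s   = std(T)                       % sample standard deviation (n-1 denominator)
    sem = s / sqrt(numel(T))           % standard deviation of the mean
    % a k = 2 coverage factor widens the quoted uncertainty:
    interval = [m - 2*sem, m + 2*sem]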
For example, we could have just used absolute values. The best precision possible for a given experiment is always limited by the apparatus.
So we will use the reading error of the Philips instrument as the error in its measurements and the accuracy of the Fluke instrument as the error in its measurements. A procedure that suffers from a systematic error is always going to give a mean value that is different from the true value.

Error Analysis Introduction. The knowledge we have of the physical world is obtained by doing experiments and making measurements. For this example: fractional uncertainty = uncertainty / average = 0.05 cm / 31.19 cm = 0.0016 ≈ 0.2%. Note that the fractional uncertainty is dimensionless but is often reported as a percentage.

A valid measurement from the tails of the underlying distribution should not be thrown out. For example, it would be unreasonable for a student to report a result like: measured density = 8.93 ± 0.475328 g/cm³ - WRONG! So how do we report our findings for our best estimate of this elusive true value?

Because experimental uncertainties are inherently imprecise, they should be rounded to one, or at most two, significant figures. A quantity such as height is not exactly defined without specifying many other circumstances. If Z = A², then the perturbation in Z due to a perturbation in A is δZ = 2A δA. Thus, in this case, Z ± δZ = A²(1 ± 2 δA/A), and not A²(1 ± δA/A) as a naive substitution would suggest. Does it mean that the acceleration is closer to 9.80000 than to 9.80001 or 9.79999?
X₁ = 23.1°C, X₂ = 22.5°C, and so on. Before this time, uncertainty estimates were evaluated and reported according to different conventions depending on the context of the measurement or the scientific discipline. The use of AdjustSignificantFigures is controlled using the UseSignificantFigures option.

Whole books can and have been written on this topic but here we distill it down to the essentials. Thus the error in the estimated mean is 0.0903696 divided by the square root of the number of repeated measurements, the square root of 4, which is numerically 0.0451848. If a coverage factor is used, there should be a clear explanation of its meaning so there is no confusion for readers interpreting the significance of the uncertainty value. Note that the relative uncertainty in f, as shown in (b) and (c) above, has the same form for multiplication and division: the relative uncertainty in a product or quotient is obtained by combining the relative uncertainties of the factors in quadrature.

Note that this means that about 30% of all experiments will disagree with the accepted value by more than one standard deviation! However, they were never able to exactly repeat their results. Most analysts rely upon quality control data obtained along with the sample data to indicate the accuracy of the procedural execution, i.e., the absence of systematic error(s). Not all measurements are done with instruments whose error can be reliably estimated.

The mean is chosen to be 78 and the standard deviation is chosen to be 10; both the mean and standard deviation are defined below. Would the error in the mass, as measured on that $50 balance, really be the following? An example is the measurement of the height of a sample of geraniums grown under identical conditions from the same batch of seed stock. Recall that to compute the average, first the sum of all the measurements is found, and the rule for addition of quantities allows the computation of the error in the sum.

Example from above with u = 0.2: |1.2 − 1.8| / 0.28 = 2.1. To avoid ambiguity, such numbers should be expressed in scientific notation (e.g. 1.20 × 10³ clearly indicates three significant figures). For example, most four-place analytical balances are accurate to ± 0.0001 grams.

Therefore, uncertainty values should be stated to only one significant figure (or perhaps two significant figures if the first digit is a 1). In the standard-deviation formula, the quantity x̄ is called the mean, and σ is called the standard deviation. However, all measurements have some degree of uncertainty that may come from a variety of sources.
Taking the square and the average, we get the law of propagation of uncertainty: (δf)² = (∂f/∂x)²(δx)² + (∂f/∂y)²(δy)² + 2(∂f/∂x)(∂f/∂y)⟨δx δy⟩. If the measurements of x and y are uncorrelated, the cross term averages to zero. And in order to draw valid conclusions the error must be indicated and dealt with properly. Do not waste your time trying to obtain a precise result when only a rough estimate is required. In the case where f depends on two or more variables, the derivation above can be repeated with minor modification.
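A short numerical sketch of this propagation rule for the simple case f(x, y) = x·y with uncorrelated errors (the values are arbitrary):

    x = 9.3;  dx = 0.2;
    y = 4.1;  dy = 0.1;
    f  = x*y;
    df = sqrt( (y*dx)^2 + (x*dy)^2 )   % df/dx = y and df/dy = x
    relativeError = df/f               % equals sqrt((dx/x)^2 + (dy/y)^2)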
For this, one introduces the standard deviation of the mean, which we simply obtain from the standard deviation by dividing by the square root of n. This is more easily seen if it is written as 3.4 × 10⁻⁵. The definition of the standard deviation is as follows. Similarly, for many experiments in the biological and life sciences, the experimenter worries most about increasing the precision of his/her measurements.

You are determining the period of oscillation of a pendulum. This usage is so common that it is impossible to avoid entirely. The experimenter calculates that the effect of the voltmeter on the circuit being measured is less than 0.003% and hence negligible. So we get: Value = 1.495 ± 0.045, or: Value = 1.50 ± 0.04. The fact that the error in the estimated mean goes down as we repeat the measurements is the reason it pays to take repeated readings.

The mean is given by x̄ = (1/N) Σᵢ xᵢ. The second question regards the "precision" of the experiment. An EDA function adjusts these significant figures based on the error. As a rule of thumb, unless there is a physical explanation of why the suspect value is spurious, and it is no more than three standard deviations away from the expected value, it should probably be kept.

It is the degree of consistency and agreement among independent measurements of the same quantity; also the reliability or reproducibility of the result. The uncertainty estimate associated with a measurement should account for both the accuracy and precision of the measurement. In the case that the error in each measurement has the same value, the result of applying these rules for propagation of errors can be summarized as a theorem. Similarly, the perturbation in Z due to a perturbation in B is δZ = (∂Z/∂B) δB. Hence, taking several measurements of the 1.0000 gram weight with the added weight of the fingerprint, the analyst would eventually report the weight of the fingerprint as 0.0005 grams.

Thus we arrive at the famous standard deviation formula: σ = sqrt( Σᵢ (xᵢ − x̄)² / (N − 1) ). The standard deviation tells us exactly what we were looking for. Thus we have 900/9 = 100 and 1500/8 = 188, giving a standard deviation of about 14 in the second case. For a sufficiently small change, an instrument may not be able to respond to it or to indicate it, or the observer may not be able to discern it.
Thus, the accuracy of the determination is likely to be much worse than the precision. The error estimation in that case becomes a difficult subject, one we won't go into in this tutorial.
Aspects of Entanglement Entropy For Gauge Theories
A definition for the entanglement entropy in a gauge theory was given recently in arXiv:1501.02593. Working on a spatial lattice, it involves embedding the physical state in an extended Hilbert space obtained by taking the tensor product of the Hilbert space of states on each link of the lattice. This extended Hilbert space admits a tensor product decomposition by definition and allows a density matrix and entanglement entropy for the set of links of interest to be defined. Here, we continue the study of this extended Hilbert space definition with particular emphasis on the case of Non-Abelian gauge theories.
We extend the electric centre definition of Casini, Huerta and Rosabal to the Non-Abelian case and find that it differs in an important term. We also find that the entanglement entropy does not agree with the maximum number of Bell pairs that can be extracted by the processes of entanglement distillation or dilution, and give protocols which achieve the maximum bound. Finally, we compute the topological entanglement entropy which follows from the extended Hilbert space definition and show that it correctly reproduces the total quantum dimension in a class of Toric code models based on Non-Abelian discrete groups.
Quantum systems are known to behave in essentially different ways from their classical counterparts. Entanglement is a key property that characterises quantum correlations and entanglement entropy provides a quantitative measure of some of these essential differences. For a bipartite system consisting of two parts, $A$ and $B$, with Hilbert spaces $\mathcal{H}_A$, $\mathcal{H}_B$ respectively, which is in a pure state $|\psi\rangle$, the entanglement entropy is obtained by constructing the density matrix of one of the two parts, say $A$, after tracing over the second one, $B$,

$$\rho_A = \mathrm{Tr}_{\mathcal{H}_B}\, |\psi\rangle\langle\psi|, \qquad (1)$$

and computing its von Neumann entropy

$$S_{EE} = -\mathrm{Tr}\, \rho_A \log \rho_A. \qquad (2)$$

The Hilbert space of the full system in this case is given by the tensor product

$$\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_B. \qquad (3)$$
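As a quick illustration of eqs.(1)-(3), consider the standard textbook example of a single Bell pair: for

$$|\psi\rangle = \tfrac{1}{\sqrt{2}} \left( |0\rangle_A |0\rangle_B + |1\rangle_A |1\rangle_B \right),$$

eq.(1) gives $\rho_A = \tfrac{1}{2}\mathbf{1}_2$, and eq.(2) then gives $S_{EE} = \log 2$, one Bell pair's worth of entanglement.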
In a gauge theory the definition of the entanglement gets more complicated. It turns out that the Hilbert space of physical states, i.e., gauge-invariant states, does not admit a tensor product decomposition of the type eq.(3) in terms of the Hilbert space of states in a region and its complement. Thus, it is not clear how to compute the entanglement described above. This is a very general feature in gauge theories and it is related to the presence of non-local operators like Wilson lines, which create non-local excitations, in these theories.
Several approaches have been suggested in the literature, see, Buividovich2008b ; Donnelly2011 ; Casini2013 ; Radicevic2014 ; Ghosh2015 ; Hung2015 ; Aoki2015 ; Casini2014 ; Donnelly2014 ; Donnelly2014b ; Donnelly2015 , to circumvent this problem and arrive at a satisfactory definition of entanglement entropy for a gauge theory. Here we will follow the approach discussed in Ghosh2015 (GST), see also Aoki2015 , which is also related to the earlier work, Buividovich2008b ; Donnelly2011 . In this approach, by working on a spatial lattice, a definition of entanglement entropy for a gauge theory is given as follows. One first embeds the space of gauge-invariant states in a bigger space $\mathcal{H}$ obtained by taking the tensor product of the Hilbert spaces on each link of the lattice. This space, $\mathcal{H}$, by definition admits a tensor product decomposition in terms of the Hilbert spaces for the set of links of interest, which we denote as $\mathcal{H}_{in}$, and the Hilbert space for the rest of the links, $\mathcal{H}_{out}$. We can write $\mathcal{H} = \mathcal{H}_{in} \otimes \mathcal{H}_{out}$, which is analogous to eq.(3). It is then straightforward to obtain the density matrix, $\rho_{in}$, eq.(1), by taking a trace over $\mathcal{H}_{out}$, and from it the entropy, eq.(7). The definition given in GST works for any set of links, including in particular a set of links which are adjacent or close to each other in space. We will refer to it as the extended Hilbert space definition below.

This definition has several nice features. It is gauge-invariant. It meets the condition of strong subadditivity. And it can be shown to agree with a path integral definition of entanglement which follows from the replica trick (more accurately, since there can be ambiguities in the replica trick on the lattice, the extended Hilbert space definition agrees with the path integral implementing a particular version of the replica trick). The definition applies to both Abelian and Non-Abelian theories and can be readily extended to gauge theories with matter.
The purpose of this paper is to explore the extended Hilbert space definition further and elucidate some of its properties.
A different approach for defining the entanglement entropy was given in the work of Casini, Huerta and Rosabal (CHR) Casini2013 , see also Radicevic2014 . In this approach one works with gauge-invariant states, and the algebra of operators acting in the region of interest and its complement. For a gauge theory it was argued that this algebra has a non-trivial centre, and this centre is responsible for the space of gauge-invariant states not having a tensor product decomposition, eq.(3). One can overcome this obstacle by diagonalising the centre and going to sectors where it takes a fixed value. The Hilbert space of gauge-invariant states in each sector then does admit a tensor product decomposition, leading to a definition of the density matrix and the entanglement entropy. The different sectors are in fact superselection sectors since no local gauge-invariant operators, acting on either the inside or the outside, can change the sector. It turns out that different choices of the centre are possible and give different definitions for the entanglement on the lattice. A particular choice, called the electric centre, corresponds to specifying the total electric flux entering the links of interest at each vertex lying on the boundary. By Gauss’ law this is also equal, up to a sign, to the total flux leaving the links of interest at the boundary vertices. This choice leads to the electric centre definition of the entanglement entropy.
For Abelian theories it was shown that the extended Hilbert space definition agreed with the electric centre definition, Casini2013 ; Ghosh2015 . The extended Hilbert space definition can also be expressed as a sum over sectors carrying different electric fluxes into the region of interest, and the contributions from each such sector agrees with what arises in the electric centre definition for the Abelian case. As mentioned above, the extended Hilbert space definition also works for Non-Abelian theories. In this paper we show how the resulting entanglement entropy in the Non-Abelian case can also be expressed as a sum over different sectors with each sector specifying, in a gauge-invariant way, the electric flux entering the inside links. We then try to extend the electric centre definition, by analogy from the Abelian case, to the Non-Abelian case, and again obtain a result which is a sum over the different electric flux sectors. However, interestingly, we find that the two definitions lead to different contributions in each sector, resulting in a different final result for the entanglement entropy.
The difference between the two definitions is tied to an interesting subtlety which arises in the Non-Abelian case. In both the Abelian and Non-Abelian cases the entanglement entropy gets contributions of two kinds. One kind is a sort of “classical” term which arises due to the system having a probability for being in the different superselection sectors. The second kind is due to the entanglement entropy within each superselection sector. In the Non-Abelian case, in the extended Hilbert space definition, the classical term in turn has two contributions. These give rise to the first two terms in eq.(32), and eq.(33), which will be discussed in section 3 in more length. One contribution is analogous to the Abelian case and arises due to the probability of being in the different superselection sectors. The other contribution, which is intrinsically a feature of the Non-Abelian case, arises as follows. In sectors which carry non-trivial electric flux at a boundary vertex, the inside and outside links which meet at this vertex by themselves transform non-trivially under gauge transformations, but together must combine to be gauge-invariant. Since in the Non-Abelian case representations of the symmetry group are of dimension greater than one this results in non-trivial entanglement between the inside and outside links and the resulting additional contribution to the entropy. This contribution was also discussed in Donnelly2011 and Hung2015 ; Aoki2015 . It turns out that this contribution of the classical kind is absent in the electric centre definition.
In quantum information theory an operational measure of entanglement entropy is provided by comparing the entanglement of a bipartite system with a set of Bell pairs. This comparison is done by the process of entanglement distillation or dilution. It is well known that for a system with localised degrees of freedom, like a spin system or a scalar field theory, the resulting number of Bell pairs produced in distillation or consumed in dilution agrees with the definition given above, eq.(7). More correctly, this is true in the asymptotic sense, when one works with $n$ copies of the system and takes the $n \to \infty$ limit. Only local operations and classical communication (LOCC) are allowed in these processes.
It was already mentioned in Ghosh2015 , see also Casini2013 , that for gauge theories the extended Hilbert space definition is not expected to agree with such an operational definition. In particular the total entanglement extracted in the form of Bell pairs in distillation or dilution should be smaller. The difference arises because physical LOCC operations can only involve local gauge-invariant operations and these are insufficient to extract all the entropy. In this paper we also explore this question in greater depth. As was mentioned above, the entanglement entropy in the extended Hilbert space definition gets contributions of two kinds, a classical contribution, and a quantum contribution due to entanglement within each superselection sector. We show that in the asymptotic sense mentioned above, all the entanglement of the second kind can be extracted in distillation or dilution, but none of the contribution of the first kind is amenable to such extraction. We make this precise by presenting protocols which achieve the maximum number of extracted Bell pairs for distillation and dilution. This result is not unexpected, and was anticipated in the Abelian case, Casini2013 , since local gauge-invariant operators cannot change the superselection sectors and therefore are not expected to be able to access the entropy contribution of the first kind.
Since the full entanglement entropy in the extended Hilbert space definition cannot be operationally measured one might wonder whether the correct definition of entanglement is obtained by simply keeping that part which does agree with the measurements. It has been shown that the entanglement entropy can be useful for characterising systems with non-trivial quantum correlations in states with a mass gap. For example, the entanglement entropy has been used to define the topological entanglement entropy Kitaev2005 ; Levin2005 which in turn is related to important properties of the system like the degeneracy of ground states on a manifold of non-trivial topology, and the total quantum dimension of excitations in the system. In section 5 we calculate the topological entanglement entropy for a class of Toric code models Kitaev1997 based on non-Abelian discrete groups using the extended Hilbert space definition. We show that the correct result is obtained. This result gets contributions from both the classical and non-classical terms in general. In the contribution from classical terms both the first and second terms in eq.(33) which were mentioned a few paragraphs above contribute. Thus dropping these terms, which cannot be measured in distillation or dilution, would not lead to the correct answers for various physical questions tied to entanglement in these systems.
This paper is organised as follows. We discuss the extended Hilbert space definition in section 2, then compare to the electric centre definition which is given for the Non-Abelian case in section 3. A discussion of the entanglement distillation and dilution protocols follows in section 4. An analysis of the Toric code models is presented in section 5. We end in section 6 with a discussion, for good measure, of the entanglement in the ground state of an $SU(2)$ gauge theory to first non-trivial order in the strong coupling expansion. We find that to this order the entanglement arises solely due to the classical terms.
2 The Definition
In this section we review the extended Hilbert space definition of entanglement entropy given in GST, see also Aoki2015 .
2.1 General Discussion
We work in a lattice gauge theory in the Hamiltonian framework. The degrees of freedom of the theory live on links of a spatial lattice. Gauge transformations are defined on vertices, $V$. Physical states are gauge-invariant and satisfy the condition

$$\hat{g}_V |\psi\rangle = |\psi\rangle \quad \forall\, V. \qquad (4)$$

These physical states span the Hilbert space of gauge-invariant states, denoted by $\mathcal{H}_{ginv}$.

An extended Hilbert space is defined as follows. The degrees of freedom on each link $l$ form a Hilbert space $\mathcal{H}_l$. The extended Hilbert space is then given by $\mathcal{H} = \otimes_l \mathcal{H}_l$ where the tensor product is taken over all links. We are interested in defining the entanglement entropy of a set of links, which we loosely call the “inside links.” The remaining links, not in the inside, are the “outside links.” In fig. 1, which shows a square lattice for example, the inside links are shown as solid lines, while the outside links are shown as dashed lines.

$\mathcal{H}$ admits a tensor product decomposition in terms of the Hilbert space of the links of interest, $\mathcal{H}_{in}$, and its complement, which is the Hilbert space of outside links, $\mathcal{H}_{out}$. Also, if $\bar{\mathcal{H}}_{ginv}$ is the orthogonal complement of $\mathcal{H}_{ginv}$ then $\mathcal{H}$ can be written as the sum

$$\mathcal{H} = \mathcal{H}_{ginv} \oplus \bar{\mathcal{H}}_{ginv}. \qquad (5)$$

To define the entropy we regard $|\psi\rangle \in \mathcal{H}_{ginv} \subset \mathcal{H}$, then trace over $\mathcal{H}_{out}$ to get the density matrix

$$\rho_{in} = \mathrm{Tr}_{\mathcal{H}_{out}}\, |\psi\rangle\langle\psi|. \qquad (6)$$

The entanglement entropy is then defined as

$$S_{EE} = -\mathrm{Tr}_{\mathcal{H}_{in}}\, \rho_{in} \log \rho_{in}. \qquad (7)$$

This definition has several nice properties that were reviewed in GST. It is unambiguous and gauge-invariant. And it meets the strong subadditivity condition. We also note that the density matrix correctly gives rise to the expectation value of any gauge-invariant operator which acts on the inside links. This is because $|\psi\rangle$ is orthogonal to all states in $\bar{\mathcal{H}}_{ginv}$.
$S_{EE}$ can be expressed in a manifestly gauge-invariant manner in terms of superselection sectors as follows. We define inside vertices as those on which only links in the inside set end, outside vertices as those on which only outside links end, and boundary vertices as those on which some inside links and some outside links end. In fig. 1, for example, $V$ is a boundary vertex on which three inside links and one outside link end. A superselection sector is specified by a vector $\vec{k}$ which gives the total electric flux entering the inside links at all the boundary vertices. Labelling the boundary vertices as $V_i$, $i = 1, \cdots, N$, the $i^{\rm th}$ entry of $\vec{k}$ is the total electric flux entering the inside at the $i^{\rm th}$ boundary vertex. Then the inside Hilbert space can be written as a sum

$$\mathcal{H}_{in} = \oplus_{\vec{k}}\, \mathcal{H}_{in}^{\vec{k}}, \qquad (8)$$

where $\mathcal{H}_{in}^{\vec{k}}$ is the subspace of $\mathcal{H}_{in}$ corresponding to the sector with flux $\vec{k}$.

It can be shown that $\rho_{in}$ is block diagonal in the different sectors and takes the form

$$\rho_{in} = \oplus_{\vec{k}}\, \rho_{\vec{k}}, \qquad (9)$$

where $\rho_{\vec{k}}$ acts on $\mathcal{H}_{in}^{\vec{k}}$.

$\rho_{\vec{k}}$ can then be expressed as

$$\rho_{\vec{k}} = p_{\vec{k}}\, \bar{\rho}_{\vec{k}}, \qquad \mathrm{Tr}_{\mathcal{H}_{in}^{\vec{k}}}\, \bar{\rho}_{\vec{k}} = 1, \qquad (10)$$

where $p_{\vec{k}}$ is the probability of being in the sector $\vec{k}$ and $\bar{\rho}_{\vec{k}}$ is the normalised density matrix in that sector. Then eq.(10) gives

$$S_{EE} = -\sum_{\vec{k}} p_{\vec{k}} \log p_{\vec{k}} + \sum_{\vec{k}} p_{\vec{k}}\, S(\bar{\rho}_{\vec{k}}). \qquad (11)$$
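For instance, in a hypothetical state with two equally probable sectors, $p_{\vec{k}_1} = p_{\vec{k}_2} = 1/2$, each of which is internally unentangled so that $S(\bar{\rho}_{\vec{k}}) = 0$, eq.(11) gives $S_{EE} = \log 2$: the entire entropy is then of the first, “classical”, kind, coming purely from the probability distribution over superselection sectors.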
The different sectors specified by the electric flux $\vec{k}$ are called superselection sectors because no gauge-invariant operator acting on the inside links alone can change these sectors. On the other hand, gauge-invariant operators which act on both inside and outside links can change these sectors, e.g., Wilson loops that extend from the inside to the outside links.

The description above applies to all gauge theories without matter. These include discrete Abelian theories, like a $Z_N$ theory, continuous Abelian theories, e.g., a $U(1)$ theory, and finally Non-Abelian gauge theories, like an $SU(2)$ gauge theory.
2.2 Non-Abelian Theories
For the Abelian cases a specification of the flux at each boundary vertex is straightforward. On the other hand, for the Non-Abelian case there is an important subtlety which we now discuss. This subtlety has to do with the fact that the irreducible representations in the Non-Abelian case are not one-dimensional.
Consider as a specific example the $SU(2)$ group, where each link, $l$, has as its degree of freedom a matrix $U \in SU(2)$. The corresponding state in the Hilbert space is denoted by $|U\rangle$. There are two sets of operators, $J^a_L$ and $J^a_R$, $a = 1, 2, 3$, which act as generators of the algebra, acting on the left and right respectively,

$$e^{i \theta^a J^a_L}\, |U\rangle = |e^{i \theta^a \sigma^a/2}\, U\rangle, \qquad e^{i \theta^a J^a_R}\, |U\rangle = |U\, e^{i \theta^a \sigma^a/2}\rangle.$$

The links are oriented to emanate from vertex $V_i$ and end on vertex $V_f$. It is easy to see that with the definitions above

$$\sum_a (J^a_L)^2 = \sum_a (J^a_R)^2.$$

Also, one can show that

$$[J^a_L, J^b_L] = i \epsilon^{abc} J^c_L, \qquad [J^a_R, J^b_R] = i \epsilon^{abc} J^c_R, \qquad [J^a_L, J^b_R] = 0.$$

The gauge transformation at $V$ is then generated by

$$G^a_V = \sum_{l\, \ni\, V} J^a(l),$$

where the sum on the RHS is over all links emanating from $V$, with $J^a = J^a_L$ for links oriented outward from $V$ and $J^a = J^a_R$ for links oriented into $V$.
Now consider a boundary vertex, \(V_i\). An example is the vertex shown in Fig. 1. The total electric flux carried to the outside from \(V_i\) is given by
\[ J^a_{\rm out}(V_i) = \sum_{\ell \,\in\, {\rm out}} J^a(\ell), \]
where the sum is over all links ending at \(V_i\) which are in the outside set. Similarly the total flux carried inside is given by
\[ J^a_{\rm in}(V_i) = \sum_{\ell \,\in\, {\rm in}} J^a(\ell), \]
with the sum being over all inside links going into \(V_i\).
Gauss’ law says that on gauge-invariant states
\[ \left( J^a_{\rm in}(V_i) + J^a_{\rm out}(V_i) \right) |\psi\rangle = 0. \]
The superselection sectors are then specified by a choice of the eigenvalue \(\epsilon_i\) of the quadratic Casimir \((J_{\rm in}(V_i))^2 = \sum_a (J^a_{\rm in}(V_i))^2\) for all boundary vertices \(V_i\). Since the operators \((J_{\rm in}(V_i))^2\) are gauge-invariant, this is a gauge-invariant characterisation of the superselection sectors.
Now we come to the subtlety. It is easy to see that the density matrix which arises for a gauge-invariant state \(|\psi\rangle\) by tracing over \(\mathcal{H}_{\rm out}\) satisfies the relation
\[ [\rho_{\rm in},\, J^a_{\rm in}(V_i)] = 0 \]
at every boundary vertex. Similarly, all gauge-invariant operators acting on inside links must also commute with \(J^a_{\rm in}(V_i)\).
Consider now a sector in which the Casimir at the \(i^{\rm th}\) boundary vertex takes the value \(\epsilon_i = j_i(j_i+1)\), corresponding to the \(2j_i+1\) dimensional irreducible representation of \(SU(2)\). Then \(\mathcal{H}_{\rm in}^{\vec{\epsilon}}\) must furnish a corresponding representation of \(J^a_{\rm in}(V_i)\). These two features mean that \(\mathcal{H}_{\rm in}^{\vec{\epsilon}}\) can be written as a tensor product
\[ \mathcal{H}_{\rm in}^{\vec{\epsilon}} = h_{j_1} \otimes h_{j_2} \otimes \cdots \otimes h_{j_N} \otimes \mathcal{H}_{\rm in}^{\vec{j}}, \]
where \(h_{j_i}\) is a \(2j_i+1\) dimensional Hilbert space and \(i = 1, \cdots, N\) label the boundary vertices. All gauge-invariant operators, and also \(\rho_{\rm in}\), act non-trivially only on the last term in the tensor product, \(\mathcal{H}_{\rm in}^{\vec{j}}\), and are trivial on the other factors. E.g., the unnormalised block \(\rho_{\vec{\epsilon}} = p_{\vec{\epsilon}}\, \hat{\rho}_{\vec{\epsilon}}\) of \(\rho_{\rm in}\) can be written as
\[ \rho_{\vec{\epsilon}} = \mathbb{1}_{2j_1+1} \otimes \cdots \otimes \mathbb{1}_{2j_N+1} \otimes \sigma_{\vec{j}}, \]
where \(\sigma_{\vec{j}}\) acts on \(\mathcal{H}_{\rm in}^{\vec{j}}\), and \(\mathbb{1}_{2j_i+1}\) denotes the identity operator acting on \(h_{j_i}\). A gauge-invariant operator \(\mathcal{O}\) acting on inside links alone cannot change the superselection sectors. In the sector \(\vec{\epsilon}\) this takes the form
\[ \mathcal{O}_{\vec{\epsilon}} = \mathbb{1}_{2j_1+1} \otimes \cdots \otimes \mathbb{1}_{2j_N+1} \otimes \tilde{\mathcal{O}}_{\vec{j}}, \]
where \(\tilde{\mathcal{O}}_{\vec{j}}\) acts on \(\mathcal{H}_{\rm in}^{\vec{j}}\).
On the other hand, the generators \(J^a_{\rm in}(V_i)\) act non-trivially only on \(h_{j_i}\) and not on the other factors,
\[ J^a_{\rm in}(V_i) = \mathbb{1} \otimes \cdots \otimes T^a_{(j_i)} \otimes \cdots \otimes \mathbb{1}, \]
where \(T^a_{(j_i)}\) denotes a matrix in the \(2j_i+1\) dimensional representation of \(SU(2)\) acting on the Hilbert space \(h_{j_i}\).
As a result the second term in eq.(14) can be further split in two. The normalised density matrix \(\hat{\rho}_{\vec{\epsilon}}\) was defined in eq.(11), eq.(13), above by rescaling the density matrix. This operator can also be written in the form eq.(26), with \(\hat{\sigma}_{\vec{j}} = \sigma_{\vec{j}} / p_{\vec{\epsilon}}\) replacing \(\sigma_{\vec{j}}\). It is now convenient to further rescale and define the operator
\[ \tilde{\rho}_{\vec{j}} = \left( \prod_{i=1}^{N} (2j_i+1) \right) \hat{\sigma}_{\vec{j}}. \]
It is easy to see from eq.(26) that this operator satisfies the condition
\[ \mathrm{Tr}_{\mathcal{H}_{\rm in}^{\vec{j}}}\, \tilde{\rho}_{\vec{j}} = 1. \]
The entanglement entropy can then be written as a sum of three terms,
\[ S_{EE} = -\sum_{\vec{j}} p_{\vec{j}} \ln p_{\vec{j}} + \sum_{\vec{j}} p_{\vec{j}} \sum_{i} \ln (2j_i+1) - \sum_{\vec{j}} p_{\vec{j}}\, \mathrm{Tr}_{\mathcal{H}_{\rm in}^{\vec{j}}}\, \tilde{\rho}_{\vec{j}} \ln \tilde{\rho}_{\vec{j}}. \]
Here \(\vec{j}\) denotes the different superselection sectors. The sum over the index \(i\) in the second term is over all boundary vertices, with \(j_i\) being the value of the total incoming angular momentum at the \(i^{\rm th}\) boundary vertex, related to \(\epsilon_i\) by eq.(24). Note that the last trace is over the \(\mathcal{H}_{\rm in}^{\vec{j}}\) subspace.
The second term in eq.(32) then is the extra contribution which arises in the Non-Abelian case since the irreducible representations can have dimension greater than unity. Physically, the total angular momentum at a boundary vertex must sum to zero. This results in a correlation between the in-going and out-going links at the vertex. The in-going links together give rise to a state in the \(2j_i+1\) dimensional representation, and so do the out-going links. These two states then combine to give a singlet of the total angular momentum emanating from the vertex. The requirement of being in a singlet state entangles the ingoing and outgoing states non-trivially, since the representations are not one dimensional. This extra entanglement results in the second term in eq.(32).
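As a minimal illustration of this extra term (our example, not spelled out in the text): consider a single boundary vertex with \(j_1 = 1/2\). The inside and outside factors combine into the singlet
\[ |s\rangle = \frac{1}{\sqrt{2}} \left( |{\uparrow}\rangle_{\rm in}\, |{\downarrow}\rangle_{\rm out} - |{\downarrow}\rangle_{\rm in}\, |{\uparrow}\rangle_{\rm out} \right), \]
whose reduced density matrix on the inside factor is \(\frac{1}{2}\mathbb{1}_2\). This contributes \(\ln 2 = \ln(2j_1+1)\) to the second term in eq.(32), on top of whatever entanglement resides in \(\mathcal{H}_{\rm in}^{\vec{j}}\).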
We have chosen the \(SU(2)\) example for simplicity. The generalisation to any Non-Abelian group is immediate. The analogues of \(J^a_L\), \(J^a_R\) are the generators of the group acting on the ingoing and outgoing links. Superselection sectors are specified by giving the irreducible representations, \(r_i\), under which these generators transform, at all boundary vertices \(V_i\), \(i = 1, \cdots, N\). (More correctly, if the outgoing links transform as the representation \(r_i\) then the ingoing links must transform as the conjugate representation \(\bar{r}_i\); together these give the identity in a unique way.) And in the second term in eq.(32), \(\ln(2j_i+1)\) is replaced by \(\ln d_{r_i}\), where \(d_{r_i}\) is the dimension of the representation \(r_i\). As a result the entanglement is given by
\[ S_{EE} = -\sum_{\vec{r}} p_{\vec{r}} \ln p_{\vec{r}} + \sum_{\vec{r}} p_{\vec{r}} \sum_{i} \ln d_{r_i} - \sum_{\vec{r}} p_{\vec{r}}\, \mathrm{Tr}\, \tilde{\rho}_{\vec{r}} \ln \tilde{\rho}_{\vec{r}}. \]
We end with some concluding comments. The first term in eq.(14) for the Abelian case is sometimes referred to as a kind of “classical” contribution to the entropy. It is the minimum entropy which would arise from a density matrix, eq.(9), which meets the condition, eq.(12). This minimum contribution would arise if \(\hat{\rho}_{\vec{\epsilon}}\), eq.(11), in each superselection sector corresponds to a pure state. Similarly, we may regard the first two terms in eq.(33) for the non-Abelian case also as a classical contribution. It too is the minimum contribution which arises for a density matrix which satisfies the condition, eq.(12), and which has the form eq.(26) required by eq.(23).
Gauge-invariant operators acting on only the inside or outside links cannot change the superselection sectors labelled by \(\vec{\epsilon}\), the probabilities \(p_{\vec{\epsilon}}\), or the values of the \(j_i\). This suggests that the minimal contribution to the entanglement represented by the first term in eq.(14) and the first two terms in eq.(33) cannot be extracted in dilution or distillation experiments. We will see in section 4 that this expectation is indeed borne out.
3 The Electric Centre Definition
For the Abelian case a different approach for defining the entanglement entropy, using only the gauge-invariant Hilbert space of states and not the extended Hilbert space, was adopted in Casini2013; see also Radicevic2014.
It was shown in Casini2013 and Ghosh2015 that the electric centre choice of CHR leads to a definition of entanglement entropy that coincides with the extended Hilbert space definition given above in the Abelian case. Here we examine the electric centre definition in the Non-Abelian case and find that it in fact differs from the definition given above, precisely in the second term on the RHS of eq.(33).
3.1 General Discussion
The essential point in the definition given by CHR is to focus on the algebras of gauge-invariant operators, \(\mathcal{A}_{\rm in}\) and \(\mathcal{A}_{\rm out}\), which act on the inside and outside links respectively. These algebras, it was argued, have a non-trivial centre,
\[ \mathcal{Z} = \mathcal{A}_{\rm in} \cap \mathcal{A}_{\rm out}, \]
for a gauge theory, which commutes with both \(\mathcal{A}_{\rm in}\) and \(\mathcal{A}_{\rm out}\). This non-trivial centre is the essential obstacle to obtaining a tensor product decomposition of \(\mathcal{H}_{\rm ginv}\).
To circumvent this problem, in the definition given by CHR, one works in sectors where the centre is diagonalised. Within each sector, labelled by say an index \(\alpha\), the intersection in eq.(34) is now trivial, containing only the identity. It can then be argued that the subspace, \(\mathcal{H}^{\alpha}\), of states in this sector admits a tensor product decomposition
\[ \mathcal{H}^{\alpha} = \mathcal{H}^{\alpha}_{\rm in} \otimes \mathcal{H}^{\alpha}_{\rm out}, \]
with the operators in \(\mathcal{A}_{\rm in}\), \(\mathcal{A}_{\rm out}\) in the sector taking the form
\[ \mathcal{O}_{\rm in} \otimes \mathbb{1}, \qquad \mathbb{1} \otimes \mathcal{O}_{\rm out}, \]
respectively. The full Hilbert space can then be decomposed as
\[ \mathcal{H}_{\rm ginv} = \oplus_{\alpha}\; \mathcal{H}^{\alpha}_{\rm in} \otimes \mathcal{H}^{\alpha}_{\rm out}, \]
and the density matrix of the inside can be written as a sum over components in the different sectors,
\[ \rho_{\rm in} = \oplus_{\alpha}\; \rho^{\alpha}_{\rm in}, \]
where each component is obtained by starting with \(|\psi\rangle\langle\psi|\) and tracing over \(\mathcal{H}^{\alpha}_{\rm out}\).
Depending on which choice is made for the centre, one gets different definitions of the entanglement entropy. As mentioned above, the electric centre choice of CHR leads to a definition of entanglement entropy that coincides with the extended Hilbert space definition given above in the Abelian case. The index \(\alpha\) introduced above for the different sectors in this case coincides with the vector \(\vec{\epsilon}\) which gives the electric flux entering the inside at each boundary vertex.
From our discussion in section 2.1, see also GST, it follows that in the Abelian case, by working in the extended Hilbert space and considering the space of gauge-invariant states \(\mathcal{H}_{\rm ginv}\), we get
\[ \mathcal{H}_{\rm ginv} = \oplus_{\vec{\epsilon}}\; \mathcal{H}^{\vec{\epsilon}}_{\rm in, ginv} \otimes \mathcal{H}^{\vec{\epsilon}}_{\rm out, ginv}. \]
Here \(\mathcal{H}^{\vec{\epsilon}}_{\rm in, ginv}\) consists of all states lying in \(\mathcal{H}^{\vec{\epsilon}}_{\rm in}\) which are invariant under gauge transformations acting on the inside vertices. Similarly, \(\mathcal{H}^{\vec{\epsilon}}_{\rm out, ginv}\) consists of states invariant under gauge transformations acting on outside vertices.
3.2 Non-Abelian Case
We now turn to examining the electric centre definition in the Non-Abelian case. For concreteness we work again with the \(SU(2)\) gauge theory; the generalisation to other Non-Abelian groups is quite straightforward. It is clear that here too there is a non-trivial centre in the algebras of gauge-invariant operators, \(\mathcal{A}_{\rm in}\), \(\mathcal{A}_{\rm out}\), which act on the inside and outside links. The choice for the centre we make, which is the analogue of the electric centre choice in the Abelian case, is given by the set
\[ \mathcal{Z} = \{ (J_{\rm in}(V_i))^2 \}, \qquad i = 1, \cdots, N, \]
where this set contains the quadratic Casimirs at each boundary vertex. (This fact was also pointed out in Radicevic2015.)
Physically, \((J_{\rm in}(V_i))^2\) is a gauge-invariant way to measure the electric flux entering a boundary vertex.
That these operators lie in the centre follows from noting that on gauge-invariant states eq.(22) is valid, and also that the \((J_{\rm in}(V_i))^2\) obviously commute with operators lying in \(\mathcal{A}_{\rm in}\), \(\mathcal{A}_{\rm out}\), respectively. Diagonalising the operators, eq.(40), and working in sectors consisting of eigenstates with fixed eigenvalues of the centre then leads to superselection sectors, specified by a vector \(\vec{\epsilon}\), with \(\epsilon_i = j_i(j_i+1)\), eq.(24), specifying the eigenvalue of \((J_{\rm in})^2\) at \(V_i\). So far everything in the discussion is entirely analogous to what was done in the extended Hilbert space case; it also ties in with the general discussion of the CHR approach in the beginning of this section.
On general grounds, mentioned at the beginning of the section, one expects that the states of \(\mathcal{H}_{\rm ginv}\) lying in each of these sectors should now admit a tensor product decomposition of the form eq.(35), with the label \(\alpha\) being identified with \(\vec{\epsilon}\). The gauge-invariant operators restricted to this sector must also take the form eq.(36).
To compare the electric centre definition we are developing here with the extended Hilbert space description above, we now relate the spaces \(\mathcal{H}^{\vec{\epsilon}}_{\rm in}\), \(\mathcal{H}^{\vec{\epsilon}}_{\rm out}\) with those arising in the extended Hilbert space description discussed in section 2.2.
Notice that the extended Hilbert space can be written as
\[ \mathcal{H} = \bigoplus_{\vec{\epsilon}, \vec{\epsilon}'}\; \mathcal{H}^{\vec{\epsilon}}_{\rm in} \otimes \mathcal{H}^{\vec{\epsilon}'}_{\rm out}. \]
Just as \(\mathcal{H}^{\vec{\epsilon}}_{\rm in}\) can be expanded as eq.(25), so also \(\mathcal{H}^{\vec{\epsilon}}_{\rm out}\) can be written as
\[ \mathcal{H}^{\vec{\epsilon}}_{\rm out} = h'_{j_1} \otimes \cdots \otimes h'_{j_N} \otimes \mathcal{H}^{\vec{j}}_{\rm out}, \]
where \(h'_{j_i}\) is the \(2j_i+1\) dimensional space furnishing the representation of \(J^a_{\rm out}(V_i)\).
There is a subspace \(\mathcal{H}^{\vec{j}}_{\rm in, ginv} \subset \mathcal{H}^{\vec{j}}_{\rm in}\) which is gauge-invariant with respect to all gauge transformations acting on the inside vertices, and similarly a subspace \(\mathcal{H}^{\vec{j}}_{\rm out, ginv} \subset \mathcal{H}^{\vec{j}}_{\rm out}\) which is gauge-invariant with respect to all gauge transformations on the outside vertices.
We now argue that the spaces \(\mathcal{H}^{\vec{\epsilon}}_{\rm in}\), \(\mathcal{H}^{\vec{\epsilon}}_{\rm out}\) can be identified with \(\mathcal{H}^{\vec{j}}_{\rm in, ginv}\), \(\mathcal{H}^{\vec{j}}_{\rm out, ginv}\), respectively.
To show this we consider a gauge-invariant state \(|\psi\rangle\) which lies in the sector \(\vec{\epsilon}\). This state can be embedded in the sector \(\mathcal{H}^{\vec{\epsilon}}_{\rm in} \otimes \mathcal{H}^{\vec{\epsilon}}_{\rm out}\) of the extended Hilbert space. In fact, since the state is invariant with respect to gauge transformations acting on the inside and outside vertices it lies in the subspace
\[ h_{j_1} \otimes h'_{j_1} \otimes \cdots \otimes h_{j_N} \otimes h'_{j_N} \otimes \mathcal{H}^{\vec{j}}_{\rm in, ginv} \otimes \mathcal{H}^{\vec{j}}_{\rm out, ginv}. \]
Gauge-invariance at the boundary vertices requires that the components in \(h_{j_1} \otimes h'_{j_1}\), and similarly for all the other boundary vertices, pair up to be singlets under the boundary gauge transformations. This means that any such state must be of the form
\[ |\psi\rangle = \left( \sum_{m_1} \frac{(-1)^{j_1 - m_1}}{\sqrt{2j_1+1}}\; |j_1, m_1\rangle\, |j_1, -m_1\rangle' \right) \otimes \cdots \otimes \left( \sum_{m_N} \frac{(-1)^{j_N - m_N}}{\sqrt{2j_N+1}}\; |j_N, m_N\rangle\, |j_N, -m_N\rangle' \right) \otimes |\tilde{\psi}\rangle. \]
Here \(|j_i, m_i\rangle\), with \(m_i = -j_i, \cdots, j_i\), are a basis for \(h_{j_i}\), \(|j_i, m_i\rangle'\) are a basis for \(h'_{j_i}\), and so on. The coefficients are such that the state is a singlet with respect to gauge transformations at the first boundary vertex, and similarly for the other boundary vertices. In the last term \(|\tilde{\psi}\rangle\) is a state in \(\mathcal{H}^{\vec{j}}_{\rm in, ginv} \otimes \mathcal{H}^{\vec{j}}_{\rm out, ginv}\). Note that for all states in this sector the first \(N\) terms in the tensor product above remain the same and only the last term, which involves the state in \(\mathcal{H}^{\vec{j}}_{\rm in, ginv} \otimes \mathcal{H}^{\vec{j}}_{\rm out, ginv}\), changes.
This shows that there is a one to one correspondence between states in the space \(\mathcal{H}^{\vec{\epsilon}}_{\rm in} \otimes \mathcal{H}^{\vec{\epsilon}}_{\rm out}\), which arose in the electric centre discussion, and states in \(\mathcal{H}^{\vec{j}}_{\rm in, ginv} \otimes \mathcal{H}^{\vec{j}}_{\rm out, ginv}\), which we defined using the extended Hilbert space. The map is one-to-one because given any state in \(\mathcal{H}^{\vec{j}}_{\rm in, ginv} \otimes \mathcal{H}^{\vec{j}}_{\rm out, ginv}\), we can obtain a state in the sector \(\vec{\epsilon}\) by appending the first \(N\) terms, which ensure gauge-invariance with respect to boundary gauge transformations. Moreover, from the form of a gauge-invariant operator \(\mathcal{O}\), eq.(27), which acts on inside links, and a similar form for operators acting on the outside links, we see that acting on these states the operators take the form \(\tilde{\mathcal{O}}_{\rm in} \otimes \mathbb{1}\), \(\mathbb{1} \otimes \tilde{\mathcal{O}}_{\rm out}\), respectively, in \(\mathcal{H}^{\vec{j}}_{\rm in, ginv} \otimes \mathcal{H}^{\vec{j}}_{\rm out, ginv}\).
It then follows, from the general arguments at the beginning of the section, that \(\mathcal{H}^{\vec{\epsilon}}_{\rm in}\), \(\mathcal{H}^{\vec{\epsilon}}_{\rm out}\), which we defined in the electric centre discussion above, can be identified with \(\mathcal{H}^{\vec{j}}_{\rm in, ginv}\), \(\mathcal{H}^{\vec{j}}_{\rm out, ginv}\), respectively,
\[ \mathcal{H}^{\vec{\epsilon}}_{\rm in} = \mathcal{H}^{\vec{j}}_{\rm in, ginv}, \qquad \mathcal{H}^{\vec{\epsilon}}_{\rm out} = \mathcal{H}^{\vec{j}}_{\rm out, ginv}. \]
We remind the reader that \(\mathcal{H}^{\vec{j}}_{\rm in, ginv}\), \(\mathcal{H}^{\vec{j}}_{\rm out, ginv}\) were defined by working in the extended Hilbert space, \(\mathcal{H}\).
3.3 Entanglement Entropy
We now come to the main point of this section, namely the difference between the entanglement entropy which arises from the electric centre description given above and the extended Hilbert space definition of the entanglement entropy.
To calculate the difference we need to relate the density matrices in the two cases. The density matrix in the superselection sector \(\vec{\epsilon}\) for the extended Hilbert space is given in eq.(26). For a gauge-invariant state \(|\psi\rangle\), it is easy to see that \(\tilde{\rho}_{\vec{j}}\) has support only on \(\mathcal{H}^{\vec{j}}_{\rm in, ginv}\). Up to a normalisation, we now argue that the density matrix in a sector \(\vec{\epsilon}\) in the electric centre definition is in fact \(\tilde{\rho}_{\vec{j}}\) restricted to \(\mathcal{H}^{\vec{j}}_{\rm in, ginv}\). By construction the expectation value of any gauge-invariant operator can be calculated by working in the extended Hilbert space. For an operator \(\mathcal{O}\) acting on the inside links we get
\[ \langle \mathcal{O} \rangle = \sum_{\vec{\epsilon}} p_{\vec{\epsilon}}\, \mathrm{Tr}_{\mathcal{H}^{\vec{j}}_{\rm in, ginv}}\, \tilde{\rho}_{\vec{j}}\, \tilde{\mathcal{O}}_{\vec{j}}, \]
where we have used the fact that the trace on the RHS can be restricted to \(\mathcal{H}^{\vec{j}}_{\rm in, ginv}\), since \(|\psi\rangle\) is gauge-invariant and \(\tilde{\rho}_{\vec{j}}\) has support only in \(\mathcal{H}^{\vec{j}}_{\rm in, ginv}\). From eq.(11), eq.(12), eq.(30) it then follows that the density matrix \(\rho^{\vec{\epsilon}}_{\rm in}\) in sector \(\vec{\epsilon}\) in the electric centre definition is given by
\[ \rho^{\vec{\epsilon}}_{\rm in} = p_{\vec{\epsilon}}\; \tilde{\rho}_{\vec{j}} \big|_{\mathcal{H}^{\vec{j}}_{\rm in, ginv}}. \]
From eq.(31) and the fact that \(\tilde{\rho}_{\vec{j}}\) has support on \(\mathcal{H}^{\vec{j}}_{\rm in, ginv}\) only, it follows that
\[ \mathrm{Tr}_{\mathcal{H}^{\vec{j}}_{\rm in, ginv}}\, \tilde{\rho}_{\vec{j}} = 1. \]
It then follows that the entanglement entropy in the electric centre definition is given by
\[ S^{\rm ec}_{EE} = -\sum_{\vec{\epsilon}} p_{\vec{\epsilon}} \ln p_{\vec{\epsilon}} - \sum_{\vec{\epsilon}} p_{\vec{\epsilon}}\, \mathrm{Tr}\, \tilde{\rho}_{\vec{j}} \ln \tilde{\rho}_{\vec{j}}. \]
Comparing, we see that the second term in eq.(32) is missing. More generally, the second term in eq.(33) will be missing, as was mentioned at the start of this section. (The trace in the last term in eq.(32), eq.(33) can be restricted to \(\mathcal{H}^{\vec{j}}_{\rm in, ginv}\) since \(\tilde{\rho}_{\vec{j}}\) has support only on this subspace of \(\mathcal{H}^{\vec{j}}_{\rm in}\).) The physical origin of this missing term was discussed in section 2.2. It arises in the extended Hilbert space definition because the total angular momentum carried by inside and outside links meeting at a boundary vertex must sum to zero, giving rise to non-zero entanglement between the inside and outside links. In the electric centre definition we work with the Hilbert space of gauge-invariant states, which can be decomposed only in terms of \(\mathcal{H}^{\vec{j}}_{\rm in, ginv}\), \(\mathcal{H}^{\vec{j}}_{\rm out, ginv}\), eq.(48). As a result this additional contribution is absent.
4 Entanglement Distillation and Dilution
It is a standard practice to compare the entanglement in a bipartite system with a reference system, typically taken to be a set of Bell pairs. The comparison is done using the processes of entanglement dilution and distillation, which involve only Local Operations and Classical Communication (LOCC); see Bennett1996; PreskillTimeless; NielsenBook; WildeBook. For a system with local degrees of freedom, like a spin system, where the Hilbert space of states admits a tensor product decomposition, it is well known that the number of Bell pairs which are consumed in dilution or produced in distillation, per copy, is given by the entanglement entropy, \(S_{EE}\).
More accurately, this statement is true in the asymptotic limit where we take \(N\) copies of the system and find that the number of pairs used up in dilution, \(k\), and those produced in distillation, \(m\), both tend to the limit
\[ \lim_{N \to \infty} \frac{k}{N} = \lim_{N \to \infty} \frac{m}{N} = S_{EE}, \]
where \(k/N\) approaches the limit from above and \(m/N\) from below. The starting point for distillation is that Alice and Bob share \(N\) copies of the state \(|\psi\rangle\) and each has a supply of unentangled reference qubits. Distillation is the process of converting the reference qubits into Bell pairs by using up the entanglement in \(|\psi\rangle^{\otimes N}\). The set-up for dilution is that Alice and Bob share \(k\) Bell pairs and Alice has \(N\) copies of the bipartite system AC in the state \(|\psi\rangle\) at her disposal. By LOCC operations the state of the \(C\) system is then teleported to Bob, using the Bell pairs, resulting in the \(N\) copies of the state \(|\psi\rangle\) now being shared between Alice and Bob. The minimum number of Bell pairs required for this is \(k\). We then find that eq.(56) holds (these statements can be made more precise using \(\epsilon\)'s and \(\delta\)'s, and the analysis below will be done in this more careful manner).
In contrast, it was argued in CHR and GST that for the extended Hilbert space definition of entanglement entropy eq.(56) is no longer true for gauge theories. This is because of the presence of superselection sectors which cannot be changed by local gauge-invariant operations, as reviewed in section 2. The existence of superselection sectors imposes an essential limitation on extracting the full entanglement entropy using dilution or distillation. In the Abelian case the entanglement entropy can be written as a sum of two terms given in eq.(14). Similarly in the Non-Abelian case it can be written as the sum of three terms given in eq.(33). One expects that the first term in eq.(14) in the Abelian case, and the first two terms in eq.(33) for the Non-Abelian case cannot be extracted using local gauge-invariant operations.
In this section we will carry out a more detailed investigation and find that this expectation is indeed borne out. For distillation we will find that the state shared between Alice and Bob at the end is still partially entangled, with a residual entanglement entropy that cannot be reduced any further. This entropy is indeed given by the first term in eq.(14) for the Abelian case and the sum of the first and second terms of eq.(33) in the Non-Abelian case. In dilution we will find that Alice and Bob need to start with a partially entangled state so that at the end of the dilution process they can share \(N\) copies of the state \(|\psi\rangle\). The entanglement in this starting state in dilution, per copy, will be the same as the residual entanglement left after distillation. The number of Bell pairs extracted in distillation and consumed in dilution in this way will turn out to be equal asymptotically,
\[ \lim_{N \to \infty} \frac{m}{N} = \lim_{N \to \infty} \frac{k}{N}. \]
But this limit will not equal \(S_{EE}\). Instead, the limit, i.e., the maximum number of Bell pairs which can be extracted per copy, will equal the second term in eq.(14) for the Abelian case, and the third term in eq.(33) in the Non-Abelian case.
A similar question relating entanglement to dilution and distillation for a simpler superselection rule, total number conservation, was answered by Schuch, Verstraete and Cirac (SVC) in Schuch2003; Schuch2004. In that case, local operations cannot change the total number of particles on either Alice or Bob's side. If a state has 1 particle on each side, then \(N\) copies of the state can be transformed to another state with \(N\) particles on each side; that is, there is still just one superselection rule. In our case, however, each copy has superselection sectors arising from the fact that the Gauss law constraint must be met in every copy. As a result, with \(N\) copies we get \(N\) sets of superselection sectors, one for each copy.
4.1 A Toy Model
We will find it convenient to carry out the analysis in a toy model consisting of just four qubits. It will become clear as we proceed that our conclusion can be easily generalised to both Abelian and Non-Abelian gauge theories. In the discussion below, at the risk of introducing somewhat confusing conventions, we label a basis for each of the four qubits, \(i = 1, \cdots, 4\), as \(|0\rangle_i\) and \(|1\rangle_i\).
We will regard this four-qubit system as a bipartite system, with the first two qubits being the first part, \(A\), and the third and fourth qubits being the second part, \(B\). In order to model the constraint of gauge-invariance we will impose a condition on this system. Namely, that the only allowed states are those for which the second and third qubits take the same value. Thus an allowed or physical state in our toy model has the form
\[ |\psi\rangle = \sum_{x = 0, 1} \kappa_x\, |\psi_x\rangle, \]
where in \(|\psi_x\rangle\) the second and third qubits both take the value \(x\), and we have normalised so that
\[ \sum_{x} |\kappa_x|^2 = 1, \qquad \langle \psi_x | \psi_x \rangle = 1. \]
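For instance (our illustration), the state
\[ |\psi\rangle = \frac{1}{\sqrt{2}} \left( |0\,0\,0\,0\rangle + |1\,1\,1\,1\rangle \right) \]
is allowed, since the middle two qubits agree in each term, whereas a state such as \(|0\,1\,0\,0\rangle\) violates the constraint and is excluded.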
The constraint that the middle two qubits in eq.(58) are the same is the analogue of Gauss’ law, which in a gauge theory without matter implies that the electric flux leaving a region is the same as that entering its complement.
The allowed operators in this toy model are those which preserve the constraint and therefore change the middle two qubits together so that they continue to be equal. This is analogous to the fact that the allowed operators in a gauge theory are gauge-invariant. As a result, the full algebra of allowed operators is generated by
\[ \{ \sigma^a_1,\; \sigma^z_2,\; \sigma^z_3,\; \sigma^x_2 \sigma^x_3,\; \sigma^a_4 \}. \]
Here we are using the notation that the superscript \(a = x, y, z\) denotes the kind of Pauli matrix, while the subscript indicates which qubit the matrices act on.
There are two subalgebras, \(\mathcal{A}\) and \(\mathcal{B}\), which act on \(A\) and \(B\), of allowed sets of operators, generated by
\[ \mathcal{A}: \{ \sigma^a_1,\; \sigma^z_2 \}, \qquad \mathcal{B}: \{ \sigma^z_3,\; \sigma^a_4 \}, \]
respectively. The centre of each subalgebra is generated by \(\sigma^z_2\) and \(\sigma^z_3\) respectively; on allowed states these take equal values. The operators in \(\mathcal{A}\) and \(\mathcal{B}\) are the analogue of gauge-invariant operators which act entirely in the inside or outside links in the gauge theory. The operator \(\sigma^x_2 \sigma^x_3\) is the analogue of a Wilson loop which crosses from one region to its complement, changing the electric flux.
In the discussion below, in an abuse of terminology, to emphasise the analogy we will sometimes refer to the allowed operators in the algebra generated by the full set above as gauge-invariant operators, and to the operators in the subalgebras \(\mathcal{A}\), \(\mathcal{B}\) as local gauge-invariant operators.
There are two superselection sectors in this toy model, given by the two values \(x = 0, 1\) of the middle two qubits in eq.(58). It is clear that the subalgebras \(\mathcal{A}\), \(\mathcal{B}\) acting on \(A\) or \(B\) cannot change these superselection sectors. An important comment worth noting is that if we consider \(N\) copies of \(|\psi\rangle\), each copy will have its own superselection sectors, resulting in a total of \(2^N\) superselection sectors.
Let us end with a few more important comments. Starting from \(|\psi\rangle\) in eq.(58) and tracing over \(B\) we get a density matrix
\[ \rho_A = \begin{pmatrix} p_0\, \hat{\rho}_0 & 0 \\ 0 & p_1\, \hat{\rho}_1 \end{pmatrix} \]
for the state in \(A\). \(\rho_A\) is block diagonal, with the two blocks referring to the two superselection sectors where the second qubit takes the values \(0, 1\) respectively. Each block is a \(2 \times 2\) matrix. \(p_0 \hat{\rho}_0\), \(p_1 \hat{\rho}_1\) are the unnormalised density matrices in these two sectors, with
\[ p_x = |\kappa_x|^2 \]
being the probabilities for the two superselection sectors.
Now consider an allowed local unitary operator \(U \in \mathcal{A}\). Under the action of this operator
\[ \rho_A \to U \rho_A U^\dagger. \]
Since \(U\) cannot change the superselection sector it is therefore also block diagonal. We then conclude that any such operator leaves \(p_0, p_1\) unchanged. Similarly, consider a generalised measurement. This corresponds to a set of operators \(\{M_a\}\) with
\[ \sum_a M_a^\dagger M_a = \mathbb{1}. \]
Under it the density matrix transforms as
\[ \rho_A \to \sum_a M_a\, \rho_A\, M_a^\dagger. \]
An allowed measurement must be block diagonal in the basis above, since it cannot change the superselection sector (i.e. each \(M_a\) must be block diagonal). It then follows from eq.(69) that this also leaves \(p_0, p_1\) unchanged, as discussed in SVC.
Next, let us write the state \(|\psi\rangle\), eq.(58), in its Schmidt basis in every sector,
\[ |\psi\rangle = \sum_{x = 0, 1} \sqrt{p_x} \sum_{i = 0, 1} \sqrt{\lambda^x_i}\; |i^x\rangle_1\, |x\rangle_2\, |x\rangle_3\, |i^x\rangle_4. \]
Note that the label \(i\) takes two values, and carries a superscript \(x\) when it appears as a label for states, because the Schmidt basis for the first and fourth qubits is independent in every superselection sector. E.g., the bases \(\{|i^0\rangle\}\) and \(\{|i^1\rangle\}\) need not coincide.
It is easy to see that the von Neumann entropy of the density matrix for \(A\) in this state is
\[ S(\rho_A) = S_c + S_q, \]
where the first term is
\[ S_c = -\sum_{x} p_x \ln p_x, \]
and the second term is
\[ S_q = -\sum_{x} p_x \sum_{i} \lambda^x_i \ln \lambda^x_i. \]
These terms have the following interpretation. \(S_c\) is the entropy which arises due to the probability of being in different superselection sectors. In fact, it is the minimum entropy which can arise from any density matrix, eq.(64), subject to the constraints eq.(65), eq.(66), which we have seen are preserved by any allowed unitary operator or measurement. \(S_q\), on the other hand, is the average of the entanglement entropies in the two superselection sectors, weighted by the probability of being in these sectors.
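A quick worked example (ours): take \(p_0 = p_1 = 1/2\), with the sector \(x = 0\) a product state (\(\lambda^0_0 = 1\)) and the sector \(x = 1\) maximally entangled (\(\lambda^1_0 = \lambda^1_1 = 1/2\)). Then
\[ S_c = \ln 2, \qquad S_q = \tfrac{1}{2} \cdot 0 + \tfrac{1}{2} \ln 2 = \tfrac{1}{2} \ln 2, \]
so of the total entropy \(\tfrac{3}{2} \ln 2\), only \(\tfrac{1}{2} \ln 2\) per copy should be extractable by the protocols below.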
The distillation and dilution protocols we will discuss next involve local gauge-invariant operators. Since, as noted above, these do not change \(p_x\), we do not expect these protocols to be able to extract the entropy in the first term, \(S_c\). Instead, an efficient protocol should at most be able to extract the second term, \(S_q\). We will see that the protocols we discuss below indeed meet this expectation.
We had mentioned towards the end of section 2 that there are classical types of contributions which arise in the entanglement of gauge theories, corresponding to the first term in eq.(14) and the first two terms in eq.(33). The term \(S_c\), eq.(72), in the toy model we are considering here is in fact analogous to those contributions. (Actually, as we will see later, for the Non-Abelian case \(S_c\) is analogous to the first term in eq.(33), which is also the classical type of contribution we get in the electric centre definition, eq.(54).) We will return to the connection with gauge theories again at the end of this section after analysing the toy model in more detail.
4.2 The Distillation Protocol
We start by taking \(N\) copies of the state \(|\psi\rangle\), denoted by \(|\psi\rangle^{\otimes N}\), with Alice having access to the copies in \(A\) and Bob to the copies in \(B\). In the discussion below the different copies will be labelled by the index \(\alpha = 1, \cdots, N\). Next, we consider a fuzzy measurement that Alice performs on her \(N\) copies of the \(A\) subsystem, corresponding to the operator
\[ \mathcal{O} = \sum_{\alpha = 1}^{N} \frac{1 - \sigma^z_2(\alpha)}{2}, \]
which counts the number of copies lying in the \(x = 1\) sector. Here, we remind the reader that the superscript \(z\) denotes the kind of Pauli matrix, the subscript \(2\) indicates that it acts on the second qubit in eq.(58), and the label \(\alpha\) specifies which copy among the \(N\) the operator acts on. If the result of this measurement is \(k\), then the state after the measurement is given by the symmetric superposition,
\[ |\psi_k\rangle = \frac{1}{\sqrt{\mathcal{N}}} \; {\sum_{\{x_\alpha\}}}' \; \bigotimes_{\alpha = 1}^{N} |\psi_{x_\alpha}\rangle, \]
where the prime indicates that the sum is over all lists \(\{x_\alpha\}\), subject to the constraint that \(\sum_\alpha x_\alpha = k\). The normalisation \(\mathcal{N}\) in eq.(75) takes the value
\[ \mathcal{N} = \binom{N}{k} \]
and is the total number of terms in the symmetric sum.
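For example (our numbers): with \(N = 4\) copies and measurement outcome \(k = 2\), the post-measurement state is an equal superposition of
\[ \mathcal{N} = \binom{4}{2} = 6 \]
terms, one for each way of assigning the sector \(x = 1\) to two of the four copies.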
Next, Alice measures two additional variables,
\[ \mathcal{O}_0 = \sum_{\alpha} P^{0}(\alpha), \qquad \mathcal{O}_1 = \sum_{\alpha} P^{1}(\alpha). \]
Here \(P^{0}(\alpha)\) stands for a projector which acts on the first qubit of the \(\alpha^{\rm th}\) copy, projecting onto \(|1^0\rangle\), the state of the first qubit in the Schmidt basis for the superselection sector with \(x = 0\), see the discussion after eq.(70) above. Similarly \(P^{1}(\alpha)\) also stands for a projector of the \(\alpha^{\rm th}\) copy, this time to a Schmidt basis state \(|1^1\rangle\) in the superselection sector \(x = 1\).
If these two measurements give rise to the results \(y_0\) and \(y_1\) respectively, then the state after these measurements is a symmetric superposition of terms,
\[ |\psi_{k, y_0, y_1}\rangle = \frac{1}{\sqrt{\mathcal{N}'}} \; {\sum_{\{x_\alpha, i_\alpha\}}}'' \; \bigotimes_{\alpha = 1}^{N} |i_\alpha^{x_\alpha}\rangle_1\, |x_\alpha\rangle_2\, |x_\alpha\rangle_3\, |i_\alpha^{x_\alpha}\rangle_4, \]
where the sum is over all lists \(\{x_\alpha, i_\alpha\}\) with the following properties: \(\sum_\alpha x_\alpha = k\), \(\sum_{\alpha: x_\alpha = 0} i_\alpha = y_0\), and \(\sum_{\alpha: x_\alpha = 1} i_\alpha = y_1\). Here \(\sum_{\alpha: x_\alpha = 0}\) indicates a restricted sum over \(\alpha\) where only those terms with \(x_\alpha = 0\) contribute, while \(\sum_{\alpha: x_\alpha = 1}\) indicates a restricted sum where only those values of \(\alpha\) with \(x_\alpha = 1\) contribute. Note that \(\mathcal{N}'\) in eq.(79) is given by
\[ \mathcal{N}' = \binom{N}{k} \binom{N-k}{y_0} \binom{k}{y_1}. \]
Review: "Calculus" by Ken Iverson
Iverson, Kenneth E., 1993. Calculus. Iverson Software Inc., 33 Major Street, Toronto M5S 2K9, Canada. 130 pp., $25.00 + 7% GST. (US Dollars)
Anyone visiting a university bookstore cannot help but notice how large, expensive and even pretentious many of the mathematics texts, and indeed texts in a variety of other disciplines, have become. At the University of Alberta, for example, the main text in one of the introductory calculus courses has almost 1100 pages and costs $71.75. Since there is also a 400-page solutions manual costing $27.65 and a 150-page laboratory manual costing $8.65, a student would have to spend over one hundred dollars for more than sixteen hundred pages of material for this one course which would be for many students only one of five courses being taken in a term. Whether the present-day student receives a fair return on such a large investment of money, reading time and the physical effort in handling the texts is of course another matter. (By contrast this reviewer’s first calculus text was an unassuming, even drab, little book of some four hundred pages costing less than four dollars although these were immediate postwar prices.)
The above thoughts have been prompted by an examination of Calculus, Kenneth Iverson’s latest monograph from Iverson Software Inc., which because of its size, scope and price stands in sharp contrast to the present-day texts just mentioned. Its content might best be judged by the following paragraph in the Preface:
The scope is broader than is usual in an introduction, embracing not only the differential and integral calculus, but also the difference calculus so useful in approximations, and the partial derivatives and the fractional calculus usually met only in advanced courses. Such breadth is achievable in small compass not only because of the adoption of informality, but also because of the executable notation employed. In particular, the array character of the notation makes possible an elementary treatment of partial derivatives in the manner used in tensor analysis.
The Preface concludes with the statement that
The text is paced for a reader familiar with polynomials, matrix products, linear functions, and other notions of elementary algebra; nevertheless, full definitions of such matters are also provided. The text, then, consists of the following eight chapters: Introduction, Differential calculus, Vector calculus, Difference calculus, Fractional calculus, Properties of functions, Interpretations & applications, and Analysis.
There is one page of references, and a one-page Language Summary given on the last page of the text rather than more conveniently on the outside back cover as is usual with most ISI publications.
The author has written Calculus in his characteristic terse style which is described very well by the following sentence beginning at the bottom of the first page of Chapter 1:
To avoid distractions from the central topic of the calculus, we will introduce the necessary notation with a minimum of comment, assuming the reader can grasp the meaning of new notation from context, from simple experiments on the computer, from the language summary in Appendix A, from the Introduction and Dictionary, or from the study of more elementary texts such as Arithmetic.
This is followed by two pages of simple examples illustrating some of the main features of J and a further two pages of exercises to reinforce these ideas. The remainder of the chapter consists of an overview of some of the topics to be developed in the remainder of the text, together with the introduction of a number of verbs for handling script files, constructing graphs, and defining some basic mathematical concepts such as circular and hyperbolic functions, matrix products, polynomials and complex numbers.
As would be expected the text makes considerable use of the recently introduced general derivative adverb D., where u D. n is the nth derivative of u, and the scalar derivative adverb D=. ("0)(D.1) such that the expression f D x gives the derivative of the function f evaluated at x. Iverson also uses the integral or anti-derivative adverb I such that f I x gives the area under the graph of f from 0 to x. Whereas the adverb D. is a primitive in Version 7 of J, the adverb I is defined in terms of a sequence of verbs which, for the parameters used for the examples and exercises, is equivalent to approximating the function by four groups of a polynomial of order ten. Persons wishing to obtain some idea of the style and presentation might read the discussion of these two adverbs in the first eight pages of Chapter 2. As an example of the use of D and I, consider the verb
N=. (%:@o.@2:) (%~) ^@-@-:@*:
for the probability density function of the standard normal distribution. Then N D 1 which has the value _0.241971 is the value of the first derivative of the probability density function evaluated one standard unit from the origin, and N I 1 which is 0.341345 is the area under the standard normal distribution from the origin out to one standard unit. Also the expression
5.1 8.5 8.5 ": (],.N,"0 (N I)) x
gives a formatted table of ordinates and areas of the standard normal distribution for any appropriate list x.
As another example, the verb
ChiSq=. ((]^(-:@<:@<:@[))*(^@(-@-:@]))) % ((2: ^ -:@[) * (!@(<:@-:@[)))
is the probability density function for the chi-square distribution, and the expression 10&ChiSq I 18.3, where 18.3 is the 5% critical value for the chi-square distribution with ten degrees of freedom, is 0.949891.
At the end of Chapter 4, Difference Calculus, Iverson introduces the verb
w=. _1&^ * (i. ! i. - ])@>:"0
for finding the weights required for calculating successive differences, and, for example, w 3 is equal to _1 3 _3 1, the weights for third differences. Then we may define the verb
Diff=. '' : '(>:x.) mp&(w x.) \ y.'
where mp=. +/ . * is the matrix product, which gives differences of an arbitrary order, and, for example, the expression 2 Diff ^&3 i. 8 is equal to 6 12 18 24 30 36, the second differences of the cubic function. Finally we may define the verb
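One plausible definition, reconstructed from the description that follows (the name DiffTable comes from the usage below; zero-padding the differences column so the three columns conform is our assumption, not the reviewer's text), is

DiffTable=. ] ,. *: ,. 0: , 1&Diff@*: NB. argument, its square, first differences of the squares (zero-padded)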
which gives a three-column table with argument values in the first column, their squares in the second column, and the first differences of the squares in the third. (We might note that one of the programs demonstrated at a conference held at the University Mathematical Laboratory, Cambridge, in June 1949 to mark the inauguration of the EDSAC tabulated the positive integers from 1 to 32 together with their squares and first differences. Persons interested in the evolution of programming languages since that time might compare the flow diagrams and program listings for the EDSAC program given on pages 395 to 401 of Randell (1975) with the expression DiffTable>:i.32 for the same calculation in J.)
The origin of some of the ideas in the present monograph may be found in the last chapter of the author’s earlier Arithmetic. Indeed, Iverson states in the Preface to Calculus that the subject is introduced in the same informal manner as arithmetic was in the earlier work. However, in this reviewer’s opinion, Calculus is a much more difficult, even formidable, work which may well deter many readers. Indeed, Iverson’s quite valid comment in the Preface that “introductory courses often succeed only in turning students away from mathematics, and from the many subjects in which the calculus plays a major role” may well apply to the present monograph. At the APL93 conference Iverson gave a tutorial “Teaching Calculus” in which he introduced the ideas and many of the details given in Calculus. The diskette distributed at that time contained a number of scripts used during the presentation. A careful study of these scripts could be a most helpful introduction to the present work. After a further study of Calculus, the reader could then, as the present reviewer has done, have the pleasure of approaching some old problems in a more general and satisfying manner than has been previously possible.
- Iverson, K. E., 1991. Arithmetic. Iverson Software Inc., Toronto.
- Iverson, K. E., 1993. J Introduction & Dictionary. Iverson Software Inc., Toronto.
- Randell, Brian, 1975. The Origins of Digital Computers. Selected Papers. Second edition. Springer-Verlag, New York.
- Standard Mathematical Tables. Nineteenth Edition, 1971. The Chemical Rubber Company, Cleveland, Ohio.
For extracts from reviews of Naive set theory see THIS LINK.
Since Halmos wrote a large number of exceptional texts, we have split this collection into two pieces. Below is the second half of our collection. For the first half of the collection of Halmos's books, see THIS LINK.
9. A Hilbert space problem book (1967), by Paul R Halmos.
Mathematical Reviews MR0208368 (34 #8178).
The book is divided into three parts: problems, hints and solutions. According to the instructions, if one is unable to solve a problem, even with the hint, he should, at least temporarily, grant it (if it is a statement) and proceed to another. He should be prepared, however, to use the as yet unproved statement in the hope of thereby getting some clues as to its solution. If he does solve a problem he should look at the hint and the solution anyway for perhaps some other variations on the theme. The rules are simple and the advantages of following them to the conscientious and diligent reader are surely obvious and incalculable. Nevertheless, there is the ever lurking temptation to combine each problem-hint-solution into a unified whole, to be read together and at one time. Although this procedure admittedly thwarts, at least partially, the aim of the book, it is certainly not without its own rewards. Thus, a researcher thereby may quickly find a compact presentation of a particular issue, often including some history, references, and enlightening side remarks.
Amer. Math. Monthly 91 (9) (1984), 592-594.
A recurring theme is the perturbation of an operator by a compact operator, beginning with the Fredholm alternative and a 1909 result of Weyl on the invariance of the limit points of the spectrum under such perturbations. ... Because of the diversity of topics that it includes, single operator theory is ideally suited to a book of the nature of Halmos's (we can imagine a Homotopy Theory Problem Book being less successful). The second edition contains 250 problems as against 199 in the first. It remains an excellent, compact survey of single operator theory, as well as a valuable source for Hilbert space techniques. The form, content, and style of the second edition deserve the same high praise as the first.
10.2. Review by: Philip Maher.
The Mathematical Gazette 73 (465) (1989), 259-260.
It might seem somewhat bizarre to be reviewing here a book that was first published in 1967 and one, moreover, whose subject matter is at the upper end of accessibility for the Gazette readership. Yet Halmos's 'A Hilbert space problem book' has never been reviewed in these pages before; the second edition (under review) differs quite a bit from the first; and, of course, anything by Halmos, however technical, promises to be lucid and entertaining. In fact, this book - with its well-nigh unparalleled form - is an object lesson in the communication of mathematics. For it does not follow that boring and so-often unilluminating format of definition, lemma, theorem, proof (a format which seems virtually de rigueur for the exposition of advanced mathematics). Rather, this book consists of problems; some 250 of them (and their solutions) which cover much of the theory of (single) linear operators on Hilbert space. Of course, I am aware that other books have appeared that approach parts of mathematics in a problem-orientated (as distinct from, so to speak, a subject-orientated) way: one thinks of Burns's 'A pathway to number theory' for example. But Halmos's book is the first to do this at an advanced (rather than at an elementary) level.
Mathematical Reviews MR0517709 (80g:47036).
The book contains a very valuable amount of information on integral operators. The theory is richly illustrated by examples and counterexamples. Many open problems are discussed. A large number of them have been solved recently by Schep and W Schachermayer, and the solutions will appear soon in print. Also, the history of the subject is well presented. The style of the book is lucid, lively and entertaining, as we have come to expect from the senior author.
11.2. Review by: Adriaan Zaanen.
Bull. Amer. Math. Soc. (N.S.) 1 (6) (1979), 953-960.
The Halmos-Sunder book is written in a lively style, as was to be expected. It is a mine of information for any analyst interested in operators, in particular operators on L² spaces. The theory is illustrated by examples as well as by counterexamples; open problems are mentioned and sometimes analyzed. A list of bibliographical notes gives information about the history of the subject. The preface ends with the remark that the book contains only a part of a large subject, with only one of several approaches, and with explicit mention of only a few of the many challenging problems that are still open. I agree, but I hope that nevertheless it will be clear from my comments in this review that I believe the present contribution to operator theory by Halmos and Sunder is a valuable one.
The Mathematical Gazette 70 (453) (1986), 253-255.
P R Halmos has a well-deserved reputation as an expositor of mathematics. One of his favourite techniques is to excavate for the simplicity underlying layers of complexity, to extract it and to display it in a strikingly illuminating way. He also of course has made important contributions to mathematics per se, but on his own self-assessment his research achievements rank only fourth, behind writing, editing and teaching. These abilities as teacher of and doer of mathematics, combined with mild but significant eccentricity, have made him a 'name' in mathematics, a name about whom the mathematical public will be sufficiently curious to guarantee interest in his autobiography. ... this is a fascinating addition to recent mathematical culture by one of its makers. The main message I absorbed from it was a set of conditions required for success in mathematics: talent, yes; single-mindedness, almost as obvious; sense of humour, essential when the going gets tough; and love, yes that is the right word - you must love mathematics, and that means all the ingredients, passion, pain and loyalty.
12.2. Review by: Henry Helson.
Mathematical Reviews MR0789980 (86m:01059).
This autobiography is a frank, personal, witty commentary on mathematicians and mathematics by one of the most influential, and observant, mathematicians of our time. It is much more about the profession of mathematics than about the personal life of its author. ... He makes two main points. The first is the importance of being literate. The ability to speak and write effectively, preferably in more than one language, is essential to effectiveness in all professional activity. And second, a real professional must work in all aspects of the job: research of course, but also teaching in several formats, exposition at several levels, refereeing and editing, departmental chores, and participation in meetings and conferences. His standard is high.
12.3. Review by: John A Dossey.
The Mathematics Teacher 79 (6) (1986), 481-482.
This autobiographical sketch details the professional aspects of the career of the distinguished American mathematician Paul Halmos. It gives the reader a delightfully candid view of the evolution of his career of research, teaching, travel, editing, and service from his secondary school days to the present. ... The book is exciting, witty, and well worth the time invested in its study. It communicates what it means to be a mathematician.
12.4. Review by: Gian-Carlo Rota.
Amer. Math. Monthly 94 (7) (1987), 700-702.
Every mathematician will rank other mathematicians in linear order according to their past accomplishments, while he rates himself on the promise of his future publications. Unlike most mathematicians, Halmos has taken the unusual step of printing the results of his lifelong ratings. By and large, he is fair to everyone he includes in his lists (from first rate (Hilbert) to fifth rate (almost everybody else)), except towards himself, to whom he is merciless (even in the choice of a title to the book: "I want to be a mathematician," as if there were any question in anybody's mind as to his professional qualifications). ... The leading thread of his exposition, what makes his narration entertaining (rather than just interesting), is mathematical gossip, which is freely allowed to unfold in accordance to its mysterious logic. The reader will be thankful for being spared the nauseating personal details that make most biographies into painful reading experiences ... Whatever does not relate to the world of mathematics is ruthlessly and justly left out (we hardly even learn whether he has a wife and kids).
Instead, the book is about his life as a mathematician among mathematicians. It shows us a little about how Halmos thinks about mathematics, about what interested and motivated him, and about how he interacted with others. It includes a lot of what might, somewhat uncharitably, be described as "gossip": stories and anecdotes about mathematics, mathematics departments, and mathematicians. In my experience, mathematicians love this sort of thing. Those of my colleagues who have read this book have enjoyed it. My students have liked it much less, partly because they aren't that interested in the world of mathematics, partly because they feel "turned off" by what they describe as Halmos's "arrogance." I think what bothers them is Halmos's bluntness about what counts and what does not count as significant in mathematics. That Halmos's harshness is mostly directed at his own work doesn't change my students' assessment. If all this famous guy can say, after trying for fifty years, is "I want to be a mathematician," they argue, then we students have no chance at all.
Would you look at some of the snapshots I have taken in the last 40-odd years? If you put a penny into a piggy bank every day for 45 years, you'll end up with something over 16,000 pennies. I have been a snapshot addict for more than 45 years, and I have averaged one snapshot a day. Over a third of the pictures so accumulated have to do with the mathematical world: they are pictures of mathematicians, their spouses, their brothers and sisters and other relatives, their offices, their dogs, and their carillon towers. The pictures were taken at the universities where I worked, and the places where I was a visitor (for a day or for a year), and, as you will see, many of them were taken over food and drink. That's rather natural, if you think about it. It is not easy, and often just not possible, to snap mathematicians at work (in a professional conversation, thinking, lecturing, reading) - it is much easier to catch them at tea or at dinner or in a bar. In any event, the result of my hobby was a collection of approximately 6000 "mathematical" pictures, and when it occurred to me to share them with the world I faced an extremely difficult problem of choice. ... The people included are not necessarily the greatest mathematicians or the best known. If I think a picture is striking, or interesting, or informative, or nostalgic, then it is here, even if the theorems its subject has proved are of less mathematical depth than those of a colleague whose office is two doors down the hall.
14.2. Review by: Arthur M Hobbs.
Mathematical Reviews MR0934204 (89f:01067).
This is a book of considerable merit in several different ways. Beginning with the most frivolous use, it is very nearly the perfect book for a mathematician's coffee table (it only lacks colour). Less trivially, with 604 photographs of (mostly) mathematicians and two photographs of the old and new buildings at Oberwolfach, each picture accompanied by an informative caption, the book is pleasant reading for any mathematician, for many mathematician's spouses, and for any other person interested in mathematics as it is lived. ... as a historical document, this book will be valuable in at least two ways. It gives a fascinating cross-section of mathematical life in the mid-twentieth century, and it provides considerable insight into the personality and interests of Professor Halmos himself.
Amer. Math. Monthly 99 (9) (1992), 888-890.
What makes a problem interesting? Its statement should be simple, not requiring excessive explanations, and the solution should be readily understandable by the intended audience. [In] Halmos's book ... his problems ... meet these criteria admirably. ... Halmos has won several writing awards, and the reader won't be disappointed in the prose with which he wraps the problems ... The book lives up to its title, which promises problems for both young and old. But is it the young or the old who are more likely to be impressed by the pretty and elegant elementary problems? I'm not sure. In any event, for the experienced mathematician or beginning graduate student seeking meatier fare the book contains several chapters with problems for the more mature reader.
15.2. Review by: Lionel Garrison.
The Mathematics Teacher 85 (7) (1992), 592.
Buy this book. In fact, buy several. Give them to your students and colleagues, and save one for when your first copy wears out. ... [this book] impels the reader to get out a pencil and start doing mathematics. Paul Halmos, one of the pre-eminent mathematicians and teachers of our time, here shares his personal treasury of favourite problems. Perhaps a third of these 165 problems are at least comprehensible to able high school students who have studied calculus and probability and who enjoy being challenged. The remainder generally assume the standard undergraduate program in analysis, topology, and abstract algebra.
15.3. Review by: G A Heuer.
Mathematical Reviews MR1143283 (92j:00009).
The 165 problems in this charming book are classified roughly by the mathematical subdiscipline into which they most naturally fit. ... While a good many of the problems are catchy enough to prick the interest of most readers without further help, most are preceded by the kind of skilful discussion for which the author is so well known, to make the problem more appealing and to help make clear why this is exactly the natural question to pose in this context. The title is apt: there is much here for the veteran professional mathematician; there is also a good bit for the budding mathematician who is still a high school pupil.
The Mathematical Gazette 81 (490) (1997), 168-170.
... it is the quality of Halmos' commentary that made [the book] such a treat to review and which means that I can recommend it without hesitation to all lecturers (who think they can teach linear algebra) and all sufficiently mature students (who think they have learnt it). His quiet emphasis on details, his penetrating insights into what makes an approach to a proof plausible, or the mode of construction of a counter-example clear, and his 'heart-on-sleeve' approach to problem-posing, in explaining why he cast the problems in the form he did, are matchless: you feel you are in the company of a master expositor, striding together through the world of linear algebra while he points out the flora, fauna, topography and the way ahead. [It is] a book bristling with lovely touches ...
16.2. Review by: Robert Messer.
Amer. Math. Monthly 105 (6) (1998), 577-579.
For students going on in mathematics, linear algebra serves as a transition to upper-level mathematics courses. In addition to learning the subject matter of linear algebra itself, these students must be fortified with a degree of mathematical maturity in working with axioms and definitions, basic proof techniques, and mathematical terminology and notation. These issues cannot be left to chance; they must be addressed explicitly to prepare students for courses such as abstract algebra and real analysis. As a textbook for a linear algebra course, Paul Halmos's 'Linear Algebra Problem Book' satisfies these criteria. ... The conversational style of writing in this book occasionally lapses into annoying chattiness. A definition can be guessable and an answer conjecturable. A corollary can be unsurprising or minute but enchanting. Within three sentences the zero linear functional has two symbols and goes from most trivial to most important and ends up uninteresting.
16.3. Review by: Jaroslav Zemánek.
Mathematical Reviews MR1310775 (96e:15001).
Understanding simple things such as basic linear algebra does not seem to be an easy task. Indeed, the author offers original insights illuminating the essence of the associative and distributive laws, and the underlying algebraic structures (groups, fields, vector spaces). The core of the book is, of course, the study of linear transformations on finite-dimensional spaces. The problems are intended for the beginner, but some of them may challenge even an expert. ... Needless to say, more emphasis on the history of the subject would be attractive in a book of this type. In brief, the reviewer regrets that the author chose not to go deeper into the subject; he shows a few trees, but makes little attempt to see the forest.
The Journal of Symbolic Logic 63 (4) (1998), 1604.
This slim book provides an introduction to logic with the goal, as suggested by the title, of demonstrating that logic can be profitably understood from an algebraic viewpoint. Instead of making the point by providing the reader an extensive treatment of algebraic logic, the book takes a modest concrete approach by limiting its consideration to that part of logic most familiar to a general audience, namely the propositional calculus and the monadic predicate calculus. The presentation reflects Halmos's pedagogical style. Topics are introduced first by logical considerations, then the ideas are abstracted, and finally they are placed in an appropriate general algebraic context. The book reads like an essay. This is not to say the presentation is not rigorous. It is. Details are presented when necessary to convey the ideas, but without overwhelming readers. The presentation is crisp and lucid yet informal. It is as if the principles of logic are being explained over a cup of coffee. The book is directed towards a general (mathematically literate) audience with an interest in modern logic. Nevertheless, the prerequisite of "a working knowledge of the basic mathematical notions that are studied in a first course in abstract algebra" should be taken seriously. It is not a textbook (in the usual sense) even though it is based on notes from a course in logic by Paul Halmos.
17.3. Review by: Graham Hoare.
The Mathematical Gazette 84 (499) (2000), 172-173.
[The book] is intended 'to show that logic can (and perhaps should) be viewed from an algebraic perspective .... Moreover, the connection between the principal theorems of the subject and well-known theorems in algebra become clearer.' Readers anticipating arguments based on truth tables or diagrams of switching circuits will be disappointed. In compensation they will be entertained by a rich array of algebraic concepts such as prime and maximal ideals, filters, homomorphisms, equivalence classes, kernels, quotient algebras and duality, all in the service of logic. As the authors state, 'propositional logic and monadic predicate calculus - predicate logic with a single quantifier, are the principal topics treated'. ... the whole will serve as a neat, succinct, introduction to logic particularly for readers very much at home with algebraic concepts.
17.4. Review by: Myra R Lipman.
The Mathematics Teacher 92 (4) (1999), 371.
The authors of this book have targeted a wide-ranging audience; however, the book cannot be all things to all people. ... Because of the scope and depth of material, this book would be most useful as a classroom reference or supplement. If the book included many more examples and some exercises, it would be outstanding and certainly more helpful to students. Viewing logic from an algebraic perspective is an intriguing concept, and the authors succeed in giving a concise overview of relatively complex material in an intelligible manner.
17.5. Review by: Natasha Dobrinen.
The Bulletin of Symbolic Logic 16 (2) (2010), 281-282.
This is an excellent and much-needed comprehensive undergraduate textbook on Boolean algebras. It contains a complete and thorough introduction to the fundamental theory of Boolean algebras. Aimed at undergraduate mathematics students, the book is, in the first author's words, "a substantially revised version of Paul Halmos' Lectures on Boolean algebras." It certainly achieves its stated goal of "steering a middle course between the elementary arithmetic aspects of the subject" and "the deeper mathematical aspects of the theory" of Boolean algebras.
17.6. Review by: Marcel Guillaume.
Mathematical Reviews MR1612588 (99m:03001).
... this booklet is a gem, whose reading the reviewer ... highly recommends, fundamentally because it gives a better designed, direct and concise overview of logic, going beyond the topics explicitly treated, which are in fact reduced just to the minimum needed in order to explain and introduce the key notions, and to explain and prove fundamental theorems in the special cases considered. As to the form, the style is vivid and clear, using simple words, and free of long and complex technicalities. The text is rich in brief comments explaining the ideas behind the reasoning and calculations, and frequently refers to simple examples and to the basic notions of universal algebra. |
A charge of negative 30 microcoulombs is distributed uniformly over the surface of a spherical volume of radius 10.0 centimeters. Determine the electric field due to this charge at a distance of 2.0 centimeters from the center of the sphere. Determine the electric field due to this charge at a distance of 5.0 centimeters from the center of the sphere. Determine the electric field due to this charge at a distance of 20.0 centimeters from the center of the sphere.
In this situation, we have a given amount of charge that is distributed uniformly over the surface of a sphere. And the sphere’s radius is given to us as well. We want to solve for the electric field that’s caused by this uniformly distributed charge at various distances from the center of the sphere that they rest on. Since the three questions that were asked are identical except for the distance from the center of the sphere at which we’re to solve for the field, let’s keep a tally of these three different distances off to the side and start with a sketch of this spherical object.
Here are these three distances from the center of the sphere. We’ve called them 𝑟 one, 𝑟 two, and 𝑟 three as well as capital 𝑅, which is the radius of the sphere itself 10.0 centimeters. In addition to all this, there’s a net negative charge uniformly distributed over the surface of the sphere. And we’re given that charge amount: negative 30 microcoulombs. We can label that charge capital 𝑄. For our first question, we want to solve for the electric field created by this charge 𝑄 at a point 𝑟 one from the center of our sphere.
Knowing that 𝑟 one is 2.0 centimeters, we can imagine it looks something like this within the sphere’s volume. When we consider the electric field at 𝑟 one created by charge 𝑄, which is distributed uniformly over the surface, we can start to imagine how the electric field lines from each individual charge on the surface interact with the field lines of the other charges. For example, let’s consider the electric field lines that come to this particular charge we’ve marked out on the surface. It’s a negative charge, so we know the field lines will come toward it rather than away from it. And these field lines might look something like this.
And looking at them, we see that they do indeed have an effect at the point we’ve labeled 𝑟 one. In other words, if it were just this charge on the surface of our spherical volume, then the electric field at this point would be nonzero. The field affects that point, but let’s keep looking at the other charges on the surface. In particular, let’s consider the charge on the spherical surface that is opposite the first one we chose. We can draw the electric field lines for this charge as well. They would look something like this, identical to the field lines from the first charge we picked. In fact, if we went around this circle, one by one drawing in the electric field lines from each of the charges on the sphere, then we would see all these electric field lines overlapping and in fact cancelling one another out.
As an example of that, if we consider the point at the very center of our sphere, then the electric field created by the first charge we’ve drawn field lines for and the field created by the second charge we’ve drawn field lines for oppose each other at that point perfectly. So just due to the effects of these two charges opposite one another on our sphere, the electric field at this point would be zero. And when we consider not just one pair of charges opposite one another on the circle but all pairs of opposite charges, this cancellation is reinforced. Here is the net effect of all this. It’s a surprising result, but it has to do with the fact that our charge is uniformly distributed over this surface.
The overall effect is that inside the sphere the electric field lines from the various charges on the sphere’s surface perfectly cancel one another out at every location; not just at the center point, but everywhere within the sphere the electric field is zero. And again this has to do with the electric field lines of this uniformly distributed charge working against one another to cancel each other out. So what does all this mean for the electric field at 𝑟 one? Well, since 𝑟 one is inside the sphere, that means that the electric field there is zero. And in fact, we can say even more. The point at 𝑟 two, a distance of 5.0 centimeters from the center of our 10.0-centimeter-radius sphere, is also inside the sphere, and therefore the field there is also equal to zero.
As we’ve seen, because of the way the electric field lines of the charges on the sphere surface work against one another, the electric field at any point within this volume is zero. But take a look at this! 𝑟 three, our final radius value, is 20.0 centimeters, which is indeed outside our spherical object. We could say that 𝑟 three is a point some distance such as this away from our sphere’s surface. And what we’d like to know is what the electric field is there at that point. To figure this out, we’ll once again use the fact that the charge on the spherical volume is distributed uniformly. That comes in handy once again! Because of that uniform distribution, we can treat all of this charge 𝑄 as though it were located at the center of the sphere.
So when we’re calculating the electric field at a point somewhere outside the sphere, we can effectively concentrate all of the charge that the sphere has on its surface at a point in its center for the purposes of this calculation. So we imagine the full amount of charge 𝑄, negative 30 microcoulombs, moves from the surface to the center of this volume. And again this isn’t happening physically, but from a mathematical perspective this will help our calculation. Now we effectively have a point charge 𝑄. And a known distance away, we want to solve for the electric field created by this point charge. That scenario may sound familiar. And we can recall that the electric field vector created by a point charge 𝑄 is equal to that charge 𝑄 multiplied by Coulomb’s constant 𝑘, all divided by the square of the distance between the charge and the point at which we’re solving for the field.
And since this is a vector we’re calculating, it has a direction, either away from or toward the point charge we’re considering. In our case then, we can say that the electric field at 𝑟 three is equal to Coulomb’s constant times our overall charge 𝑄 divided by 𝑟 sub three squared, in the radial direction, where that direction is given along a line between our point charge and the point at which we’re calculating the field. Looking at this equation, we already are given 𝑟 sub three, and we know 𝑄. The only thing we still need to know is Coulomb’s constant 𝑘. When we look up or recall that constant, we see it’s equal to 8.99 times 10 to the ninth newton-square meters per square coulomb.
So we enter this value for 𝑘, we write our charge 𝑄 as negative 30 times 10 to the negative sixth coulombs, and we rewrite our radial distance of 20.0 centimeters as 0.20 meters. When we calculate all this, we find a result to two significant figures of negative 6.7 times 10 to the sixth newtons per coulomb in the 𝑟-hat direction. That’s the electric field at point 𝑟 three. And note that this field is a vector with both magnitude and radial direction.
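As a quick check on that arithmetic, here is a minimal Python sketch (an illustration added here, not part of the original lesson) that applies the same shell-theorem logic: zero field everywhere inside the uniformly charged spherical surface, and the point-charge formula kQ divided by r squared outside.

# Minimal sketch of the shell-theorem calculation described above.
# All values are in SI units; the negative sign means the field points
# radially inward, toward the negative charge.
K = 8.99e9   # Coulomb's constant, N*m^2/C^2
Q = -30e-6   # total charge on the spherical surface, C
R = 0.100    # sphere radius, m

def field(r):
    """Signed radial electric field at distance r from the center."""
    if r < R:
        return 0.0          # contributions cancel everywhere inside the shell
    return K * Q / r ** 2   # outside, the shell acts like a point charge

for r in (0.020, 0.050, 0.200):
    print(f"E({r} m) = {field(r):.2e} N/C")
# prints 0, 0, and about -6.74e+06 N/C, matching the worked answer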
Killer Sudoku Pro is also sometimes known as Killer Pro or KenDoku.
Fill in the Missing Grid.
How to play Killer Sudoku without numbers. Each row, column, and nonet can only contain unique numbers. This is the standard rule for Sudoku. Your goal is the same as in regular sudoku.
It's time to delve into the section you have been waiting for: how to play Killer Sudoku. Killer Sudoku is a mix of Sudoku and Kakuro. While solving the sudoku puzzle you can only use each number one time in each square, column, and row.
This uses the fact that every row, column, and block must contain each of the numbers 1 to 9 once. The numbers in this column will add up to 45. – Each Sudoku has only one true solution.
Once you put in this grid it is sooo much easier. How to play Killer Sudoku online. Kakuro operates in a similar way.
If ordinary Sudoku puzzles are too easy for you, you will enjoy learning how to do a Killer Sudoku. You couldn't have two 4s appearing in the same row, column, or nonet. This easier sudoku is from killersudokuonline.
In Killer Sudoku, no numbers will be filled in at the start.
So, like Killer Sudoku, Killer Sudoku Pro puzzles include all of the rules of regular Sudoku but add in dashed-line cages which must result in given values when a particular operation is applied. The sum of the numbers inside of a cage must equal the clue number. – Do not repeat the numbers in the same row or column.
Make sure the sum of numbers in each cage is equal to the number in the upper-left corner of the cage. Enter the numbers 1 through 9 in each row, column, and 3×3 box. With our Sudoku free puzzles app you can not only enjoy killer sudoku games anytime, anywhere, but also learn Sudoku techniques from it.
– Do not repeat the numbers in the red blocks. This will help you to quickly note all possibilities for any given cage when solving a Killer Sudoku puzzle. Take the first column of the Killer Sudoku shown in Figure 1.
But don't try to force anything: Sudoku rewards patience, insights, and recognition of patterns, not blind luck or guessing. – Do not repeat the numbers in the 3×3 groups. You can try this sensational puzzle from Christoph Seeliger here: https://cracking-the-cryptic.web.app/sudoku/36r8R9FNnN
Fill every row, column, and 3×3 region with the numbers 1-9 once. A cage with two cells and a clue of 3 must contain the numbers 2 and 1. The name Killer Sudoku arises because of the wicked twist on the standard Sudoku puzzle: you must not only place each of the numbers 1 to 9 (or 1 to the size of the puzzle) into each of the rows, columns, and bold-lined 3×3 (or other size) boxes, but you must also place the numbers into each dashed-line cage so that they add up to its given total – and without repeating a digit in a dashed-line cage.
The sum of these numbers must give the indicated value. Below are all possible combinations of numbers that can be placed in a Killer Sudoku cage, given the cage size and the cage total. Rule of 45: each sudoku region (i.e., row, column, or nonet) contains the digits one through nine.
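To illustrate how such a combination table can be generated, here is a short Python sketch (added for illustration; it is not part of the original article) that lists every set of distinct digits 1-9 of a given cage size that adds up to a given cage total.

# Enumerate the possible digit sets for a Killer Sudoku cage.
# Digits within a cage must be distinct, so combinations (not
# permutations) of 1..9 are exactly what we need.
from itertools import combinations

def cage_combos(size, total):
    """All sets of `size` distinct digits 1-9 that sum to `total`."""
    return [c for c in combinations(range(1, 10), size) if sum(c) == total]

print(cage_combos(2, 3))    # [(1, 2)] - the two-cell, clue-3 cage above
print(cage_combos(3, 24))   # [(7, 8, 9)]
print(cage_combos(2, 10))   # [(1, 9), (2, 8), (3, 7), (4, 6)]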
Pay attention to the cages, groups of cells indicated by dotted lines. Fill all rows, columns, and 3×3 blocks with numbers 1-9, exactly like in classic sudoku. If you don't know what number to put in a certain space, keep scanning the other areas of the grid until you see an opportunity to place a number.
Therefore the total of all numbers in one row, column, or block will always be 45. This will make it easier for you to place numbers and solve the puzzle. For a more comprehensive guide, please read our How To Play Killer Sudoku. The sum (addition) of the numbers entered into each respective shaded area must be equal to the clue in the area's top-left corner, and no number may be used in the same shaded area more than once.
The difference is how you arrive at those numbers. Thus if all the digits but one appear in a row, the missing digit must appear in the empty cell.
Some other variations include Magic Sudoku, Killer Sudoku, Kakuro, Pseudoku, and several others. You see, a Sudoku puzzle is missing a grid.
Sudoku is a game of placing numbers 1-9 in empty spaces within the same row, column, or square; however, many beginning Sudoku players make the mistake of only focusing on rows (horizontal) or columns (vertical). In the former case each region must contain all the digits one to nine. This rule can be applied to sudoku regions (i.e., row, column, or nonet) or to a cage.
An essential Killer Sudoku solving technique is the 45 rule. The objective of Sudoku is to fill a 9×9 grid made of squares so that each row, each column, and each full 3×3 square use the numbers 1-9. Complete all positions with 1-9, following these rules.
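As a concrete instance of the 45 rule, here is a tiny Python sketch (my worked illustration, with made-up cage totals) showing how cage sums let you deduce a single leftover "innie" or "outie" cell in a region.

# The 45 rule in code: every row, column, and nonet sums to 45, so cage
# totals let you deduce leftover cells. Cage totals below are hypothetical.
REGION_SUM = 45

cages_inside_column = [14, 9, 15]      # cages lying entirely within one column
innie = REGION_SUM - sum(cages_inside_column)
print(innie)   # the one uncovered cell must be 45 - 38 = 7

cages_covering_column = [20, 17, 14]   # cages covering the column plus one extra cell
outie = sum(cages_covering_column) - REGION_SUM
print(outie)   # the cell sticking out must be 51 - 45 = 6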
How to play Killer Sudoku. This variation of Sudoku has the same grid but uses letters rather than numbers to fill the cells, without duplicating letters. How to solve Killer Sudoku Pro puzzles.
Like Sudoku, you must place each of the numbers 1 to 9 (or 1 to the size of the puzzle) into each row, column, and box. How to play Sudoku for beginners and experts.
Some myths concerning statistical hypothesis testing
mats_trash at hotmail.com
Thu Nov 7 15:28:21 EST 2002
> The first is not an assumption,
> it is a fact. A p-value expresses a conditional probability. That is, a
> p-value expresses the probability of obtaining the observation in question
> GIVEN THAT THE NULL HYPOTHESIS IS TRUE. Since it is not known whether or not
> the null hypothesis is true (at least ostensibly, but see below), the notion
> that a small p-value means the finding is highly likely to be replicated is
> clearly false.
Rubbish, it's not just wrong - it doesn't even make sense. Let's say we
have two groups of patients with the same disease, one is given
treatment A, one B. The initial assumption is that there is no
difference in the efficacy of the treatments (null hypothesis). We
set up a trial to see if either patient group does any better. As it
turns out B do better than A. We apply a suitable test and get a
p<0.05. Therefore there is only a 5% chance that what was found true
for the selected patients is not generally true of all patients with
the disease i.e. that in fact, for the population as a whole A=B. Now
you are saying that you cannot know the null hypothesis is true
beforehand - but that's the whole point of the test! Where does
arguing that you can't know whether the null is right or wrong
beforehand get you?
> The only way to demonstrate the reliability of data is to
> replicate the finding.
Where in standard statistics does it claim otherwise? The p value is
only a probability.
> Marc does not say what follows in his paper, but this
> misconception has produced a state of affairs in which a great deal of
> importance is attached to findings before it is clear that the finding is
> reliable. The result is that there is, all things being equal, a great deal
> of discrepant results in the various scientific literatures that rely on
> statistical significance testing. In contrast, for sciences in which the
> reliability is demonstrated in each subject (usually repeatedly), or
> "subject" if the preparation is not a whole animal, there is far less
> failure to replicate (this is because such data are published only when
> there have been numerous demonstrations of reliability within and across
> subjects). For an example of how this is done, you may examine my paper: The
> Effects of Acutely Administered Cocaine on Responding Maintained by a
> Progressive-ratio Schedule of Food Presentation, which is in press in
> Behavioural Pharmacology. Or, you may examine virtually any paper in the
> Journal of the Experimental Analysis of Behavior. Or you may obtain a copy
> of Sidman's Tactics of Scientific Research, or even Claude Bernard's classic
Doh! You are doing the very same as the people you chastise! By
repeating the experiments you are increasing your n, such that if
there is a true difference it should become apparent. Just because
you don't apply a t-test and get a p value doesn't mean you aren't
doing the same thing. If six animals respond to cocaine and six don't
to placebo, the implicit message is that you'll get a low p value. When
datasets of tens of thousands are involved you need tools to
summarise. What would you say if mice 1, 3 and 5 responded to cocaine
and no others did? Would you say cocaine does have an effect? How do
you proceed to argue your case and produce a conclusive result?
> Mat: The second point - is the argument that the procedures are incorrect
> (i.e. the algorithm) or that the underlying basic assumptions are
> incorrect (e.g. normal distribution). If it is the former, then again
> its rubbish, if its the latter then this argument is well known and he
> presents nothing new.
> GS: Wrong. Remember that a p-value represents the probability that one will
> observe certain data given that the null hypothesis is true. If one asserts
> that the p-value is really the probability that the null hypothesis is true
> given the data (which is the same thing as saying it represents the
> probability that the observed data are "due to chance") is to "reverse the
> conditionality." As Marc says, this is tantamount to saying that the
> probability of rain given that it is cloudy is the same as the probability
> that it is cloudy given that it is raining. Think about it when your blood
> pressure returns to normal.
No, changing the assertion is not allowed as any decent statistician
will tell you. The p value is categorically not a probability that
any hypothesis, null or otherwise, is true. You don't actually
understand this, do you? In the population under investigation, either
the null or proposed hypothesis is true. What the stat test tells you
is the likelihood of you again finding a significant difference if you
took another sample of the population and did the trial again. It
does not tell you what is true or not true of the whole population.
The conclusions drawn are tentative inferences based on the stats.
The arbitrary limit is set at 95% and above this we claim that we have
good enough evidence to act as though the null hypothesis is not true
of the general population - it still may be true, we will never know.
All we can do is act according to the best available evidence. It's
modern science, and the approach has improved healthcare dramatically.
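To make the conditional-probability point concrete, here is a small Python simulation (my illustration with assumed parameters, not part of the original exchange): when the null hypothesis is true by construction, a test at the 0.05 level flags a "significant" difference in roughly 5% of repeated trials. That is what P(data | H0) controls; it says nothing directly about P(H0 | data).

# Simulate many two-group trials in which the null hypothesis is TRUE
# (both groups drawn from the same distribution), and count how often
# a t-test at the 5% level reports "significance" anyway.
import random
import statistics

def t_statistic(a, b):
    """Two-sample (Welch-style) t statistic for equal-sized groups."""
    n = len(a)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / ((va / n + vb / n) ** 0.5)

random.seed(1)
n, trials, false_alarms = 30, 2000, 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if abs(t_statistic(a, b)) > 2.0:   # |t| > ~2.0 is p < 0.05 at this n
        false_alarms += 1

print(false_alarms / trials)   # close to 0.05, by construction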
> GS: Now it is my turn to use the term "rubbish" (here in the States, we
> usually call it "garbage," but BS is probably more appropriate). If you
> "obtain significance" you write a paper and submit it. If you do not, you
> throw the data in the garbage (sounds pretty damn "categorical" to me), or
> you just "increase the N" until you have found your "truth" (the fact that
> all you have to do usually to reject the null hypothesis is simply add more
> subjects should tell you something).
Just increase your n?! This is laughable. If you had been taught any
stats you would understand that if there is only a very slight
difference in the efficacy of two drugs, say, then a large sample size
will be needed so that the difference becomes apparent. Let's say, for
example, that drug A makes 50% of people better, while drug B makes 52%
of people better. Would you expect that if you chose ten people on
each drug you'd observe the difference? What about 100? Would you
be confident in being treated by a doctor who based his treatment on
the observation of the 20 other people he'd seen with your disease in his
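For what it is worth, the 50% versus 52% example can be made quantitative with the standard normal-approximation sample-size formula for comparing two proportions; the Python sketch below is an editorial illustration, not from the thread, and the chosen power and significance level are assumptions.

# Rough sample size per group needed to detect 50% vs 52% response rates
# with a two-sided alpha = 0.05 test at 80% power (normal approximation
# for comparing two proportions).
z_alpha = 1.96   # two-sided 5% significance level
z_beta = 0.84    # 80% power

p1, p2 = 0.50, 0.52
variance = p1 * (1 - p1) + p2 * (1 - p2)
n_per_group = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
print(round(n_per_group))   # roughly 9800 patients per group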
> You know this is true. But, in any
> event, you are not on the right track. The point is that the strawman null
> hypothesis is almost always not true. Marc, quoting Kraemer (ref. on
> request), writes, "something nonrandom is almost always going on, and it
> seems a trivial exercise to redemonstrate that fact."
When was it first demonstrated? Proof of this comes from where? Prove
to me that there is any sort of difference between anything you can
think of. Have you observed all of them?
> At the end of this
> section Branch concludes, "Perhaps it is not so bad that significance tests
> do not estimate the truth of the null hypothesis, because we already know
> that it is false.
Any null hypothesis is false? Prozac is no good for heart attacks,
Drug A is no better than drug B - false!? Which one is better?
If the origin of your and whoever else's dissatisfaction with p values
and the like lies in the fact that trials etc. often contradict, even
though they all publish a 'significant' p value, then aim your
contempt at the design of the trials, not the stats.
[07:12:03] GM: Okay, to recap: you are locked in combat at the moment. The Initiative order is: Sarah Komillia Suki Gamjin Lurana Jaron Enemy Aylanea (sp) Enemy Alleria Enemy Enemy Enemy. It will be Lurana's turn when we start
[07:12:27] Jess: (uhhh i dont remember combat…)
[07:13:31] 2Lt. Madresa: (( you spelled Aylanea right ))
[07:13:31] GM: Status report: Jaron -340 MD main body - 16 SRM: Ay -140 MD main body -12 SRM: Sarah -12 SRM: Gamjin -15 SRM: Lurana -90 MD Main body
[07:17:33] Lt. Jaron: .
[07:17:59] GM: kk, so, we all clear where we are and the condition we are in?
[07:18:13] Lt. Jaron: (yea SoL)
[07:18:29] James: (yep)
[07:19:14] GM: Jaron, you want to fill Lurana in on the last action 'she' took, and the result of it?
[07:20:03] GM: B = Silverback
[07:20:16] GM: I has enough S's up there already
[07:20:38] Jess: (and m?)
[07:20:59] GM: Marines in Cyclones
[07:21:21] GM: Alleria is still connected to Lu at this point, as the Legios is in hover…
[07:21:44] GM: Suki has split from Sarah and is in Battloid mode using the dune like a trenchline.
[07:22:12] GM: So Jaron…
[07:22:37] Lt. Jaron: (me & madresa are behind our line, heading for another run; maybe)
[07:22:54] GM: right, and Ima let YOU tell Lurana what happened with her last action hee hee hee
[07:23:15] Lt. Jaron: (oh yea forgot about that)
[07:23:21] GM: heheh I didn't
[07:24:08] Lt. Jaron: (done)
[07:25:19] GM: and you tell her the result?
[07:25:27] GM: that we all know but our characters do not
[07:25:34] GM: gotta love that 4th wall
[07:25:38] Lt. Jaron: (cliffhanger, we dont know whats going to happen)
[07:26:06] GM: right oh, and off we go
[07:26:09] GM: dt
[07:26:44] Narrator: Far behind you, 200 miles to your west, a pair of Phalanx Destroids step out from their armored bunkers and into the cool night air.
[07:27:02] Narrator: “Fire mission received.” The lead pilot noted as grid coordinates flashed across his MFD. “Plot and track look good, missiles armed.” His wingman completed a pre-fire sequence of his own. “Longbow 2 good to go.”
The pair of Phalanx settled as the massive bay doors of their MDS-H-22 swung open to expose the long-range warheads.
“Longbow strike, you are clear to fire!” came the battery commander’s voice over the net.
Night turned into day for a split second as a massive blast of flame leapt from the rear vent nozzle of each ‘Derringer’. First one missile from each pod, followed by a second, the trailing missile’s thruster flare illuminating the smoke trail of the missile preceding it, creating a mile-long iridescent plume in the sky. “Missiles away!” Seconds later Longbow 2 fired his missiles. “Longbow 2, missiles away!”
Eight missiles streaked into the air, their solid rockets burning bright, powering them towards the designated location. Now, miles away from the launch vehicle, guidance went internal and the missiles armed their mixed plasma and high-explosive warheads. Breaking past Mach 2, the missiles began to nose over into their terminal dive.
Below, upon sandy dunes, UEEF mecha and Haydonite forces exchanged fire. The missiles’ programmed guidance continued on, however, and the UEEF mecha were getting larger in the sight picture; detail could be made out now: Silverbacks, an Alpha, and the distinct stocky frame of a Beta.
On the ground the Haydonite air defense units detect the incoming missiles. “Missiles incoming! Target and destroy!”
The three remaining Haydonite ADA units swivel their cannons skyward and open fire, leveling massed and accurate ground fire at the onrushing barrage! First one explosion, then two more! The missiles move to evade the onrushing laser blasts but the Haydonite fire is just too precise. A pair of missiles manage to survive the salvo, and only then do the Haydonites realize the irony of their actions.
Striking dead center within the UEEF lines, the two long-range missiles detonate, peppering everything within 80 feet with white-hot fragments. One of the Silverbacks is shredded and collapses, while the two Cyclone dismounts nearby are tossed like rag dolls. Suki’s Alpha and Sarah’s nearby Beta also take damage as shrapnel and molten metal impact their right sides!
[07:29:29] Jess: (hence why lu dosent call air strikes..)
[07:29:53] GM: Well technically, she is the air strike lol
[07:30:06] Jess: (sence when….)
[07:30:26] GM: since you are in an airplane….and..you shoot things on the ground…
[07:30:40] Jess: (i ment artiliary strikes my bad)
[07:31:11] GM: Sarah and Suki your mecha takes 90 MD main Body, the Silverback and two cyc marines take 160 md
[07:32:10] 2Lt. Madresa: (( thought Sarah and Suki were separated? ))
[07:32:21] GM: Lurana your turn (there are at least 3 Haydonite ADA units behind the dunes (where the dragons are) a lone Behemoth and a Revanant, last count put Haydonite dismounts (reavers) in the number of 30 or so)
[07:32:25] Jess: (they are but they're close together)
[07:32:37] GM: those missile have a heck of a blast radius
[07:33:24] 2Lt. Madresa: (( kay ))
[07:33:57] UEEF: Hitman team! Call corrections on fire mission! Second salvo ready! Tube artillery moving into position as we speak, ETA about 10 minutes! How copy over!?
[07:33:59] 2 LT Komillia: ((usually around 100 feet))
[07:34:57] Jess tries to call the correction on artiliary perfering to call it off afterwht just happend (wtf do i roll for that…)
[07:35:03] Jess: (sence i wasnt here last time)
[07:35:12] Lt. Fallnya tries to call the correction on artiliary perfering to call it off afterwht just happend (wtf do i roll for that…)
[07:35:13] Lt. Fallnya: (sence i wasnt here last time)
[07:35:16] GM: Note: I fully expect things to get lost in translation and maybe not understood; flow with it, chalk it up to Fog of War.
[07:35:29] GM: Navigation
[07:35:40] Lt. Fallnya: (fog of war dosent hardly exist in technical battles ….)
[07:35:46] GM: and an RSI couldn't hurt, thats what you use last time right Jaron?
[07:35:59] GM: When your enemy is shadow cloaked, it does…
[07:35:59] Lt. Jaron: (just nav)
[07:36:03] GM: kk
[07:36:05] GM: then just nav
[07:36:20] Lt. Fallnya: [1d100] => = (21)
[07:36:33] Lt. Fallnya: 61% normal no idea on penalty
[07:36:42] GM: straight dice roll
[07:36:46] GM: you succeed
[07:36:48] Lt. Fallnya: umm
[07:36:50] Lt. Fallnya: she has nav
[07:38:00] Lt. Fallnya, after the call is made and new coordinates are set, opens up with her guns and lasers, hanging low, trying to strafe the troops
[07:38:25] Lt. Fallnya: (terrain hugging)
[07:38:37] Lt. Fallnya: (or AA emplacedments
[07:38:45] Lt. Fallnya: (Wich ever is bigger
[07:39:16] GM: You can make out a pair of enemies on the far side of the far dune; your altitude, slight as it is, allows you this. Of the larger units only the top half of the Behemoth is visible
[07:39:24] GM: the two smaller units are reavers.
[07:39:51] Lt. Fallnya goes for the behemoth
[07:39:58] GM: Roll it and call it
[07:40:05] Lt. Fallnya: [1d20+11] => [19,11] = (30)
[07:40:21] Lt. Fallnya: (bleh *kicks dice* you shoulda been a 20!)
[07:40:53] 2Lt. Madresa: (( rofl ))
[07:41:03] GM: I want you to ROLL for how many missile you fired, since it would not be fair to say a high number after a good roll
[07:41:11] Lt. Fallnya: (NOT FIREING MISSILES
[07:41:17] GM: oh well in that case
[07:41:31] GM: and chill Jess,
[07:41:39] Lt. Fallnya: (points up i said guns and lasers)
[07:41:46] GM: you said strafe
[07:41:49] GM: thats vague
[07:42:00] Lt. Fallnya: guns and lasers hanging low trying to strafe the troops
[07:42:12] GM: roll damage
[07:42:27] GM: and yes, you are right
[07:42:44] GM: While Lu is rolling her considerable damage, Jaron, your action
[07:43:17] Lt. Fallnya: (ummm i forgot the damage on EU-15's sence you havent updated that on lu's sheet)
[07:43:29] GM: look at the House Rules
[07:43:50] GM: entry 15
[07:44:12] Lt. Jaron turns about and fires a volley of four at an ADA unit ([1d20+5] => [10,5] = (15))
[07:44:46] GM: [1d20+5] => [7,5] = (12)
[07:44:51] GM: Roll damage
[07:45:13] Lt. Jaron: [(2d6*10)*4] => 240
[07:45:29] GM: The ADA unit is obliterated
[07:46:15] Lt. Fallnya: [4d8x2] => 4d8x2 Nose lasers, [(3d4x10)+30*2] => (3d4x10)+30*2 dual EU-15's, [4d4*10+30] => 110 dual guns from beta
[07:46:27] Lt. Fallnya: err
[07:46:30] Lt. Fallnya: 2 of them didnt roll
[07:46:38] Lt. Fallnya: [4d8*2] => 36
[07:47:02] Lt. Fallnya: [(3d4*10)+30*2] => 160 dual EU-15's
[07:47:17] Lt. Fallnya: 306 looks like
[07:47:41] GM: Lurana, your rounds impact the side of the Behemoth and all rounds are deflected by an energy barrier!
[07:48:13] Lt. Fallnya: (…. ok wouldent we see the dam barrier…)
[07:48:20] GM: The Behemoth lifts slightly into the air, about 15 or so
[07:48:22] GM: nope
[07:48:28] GM: not until it's hit
[07:48:45] Lt. Fallnya: (ok funny all other barriers in robotech were visible… )
[07:49:22] GM: That's one barrier, the big one, and the PPB; and if you pay attention to the Shadow Chronicles, when the UEEF started in on the Haydonite ships the barrier was only visible upon impact….
[07:49:42] GM: moving on
[07:50:41] 2 LT Komillia: (afk)
[07:50:41] GM: The behemoth moves backwards as a few more dismounts exit the rear of the vehicle.
[07:51:13] GM: Ay, your turn
[07:52:19] 2Lt. Madresa watches for any enemies visible from the cover position most moved to, slipping up in order to get a better vantage to see if she can find an ADA unit.
[07:52:55] GM: Slipping up, so, she is taking a postion behind the dune?
[07:52:59] 2 LT Komillia: (back)
[07:53:03] GM: wb
[07:53:29] 2Lt. Madresa: (( slipping out from behind the ridge everyone had hid behind, or most had. ))
[07:53:37] GM: show me
[07:55:02] 2Lt. Madresa: (( *checks* ))
[07:55:10] Lt. Fallnya: (test test am i still here)
[07:55:14] GM: yes
[07:55:42] 2Lt. Madresa: (( Ay went to the dune last time, she isn't up with Jaron, for one… and… ))
[07:56:03] GM: just plop a dot on the map where you want to be
[07:56:12] 2Lt. Madresa: probably moving to there
[07:56:17] GM: so long as that dot isn't in a hnice comfy bed in Pioint R
[07:56:19] 2Lt. Madresa: (( on the ground. ))
[07:56:26] 2Lt. Madresa: (( rofl ))
[07:56:32] GM: umm, kk, hold on let me clarify
[07:56:56] GM: Dunes are big
[07:58:26] 2Lt. Madresa: (( up by where Gamjin and Komillia pulled back to, basically. ))
[07:58:40] 2Lt. Madresa: (( somewhere there, yeah ))
[07:59:18] GM: kk
[07:59:21] GM: go ahead
[07:59:42] 2Lt. Madresa: (( there anything for her ot see, there? Or no? ))
[07:59:49] 2Lt. Madresa: (( she's looking for any hostiles. ))
[08:00:10] GM: whoever is removing the lines…stop it
[08:00:52] GM: okay so you are sneak sneaking up to Gam and Komillia
[08:01:20] 2Lt. Madresa: (( since I'm sneaking, I figure I won't make it there quite in one round. ))
[08:01:28] 2Lt. Madresa: (( or one action rather ))
[08:02:00] GM: nod
[08:02:05] GM: you are in transit
[08:02:29] GM: Jaron, one of the ADA units fires on you as the unit next to it explodes.
[08:02:33] GM: [1d20+5] => [17,5] = (22)
[08:03:04] Lt. Jaron tries to dodge out of the way (dodge [1d20+11] => [16,11] = (27))
[08:03:29] GM: You are able to nimbly maneuver your all-but-crippled mecha out of harm's way
[08:03:32] GM: Alleria
[08:03:49] Lt. Alleria would try to lock missiles on an ADA unit if i can see it
[08:04:25] GM: Roll an RSI
[08:04:55] Lt. Alleria: [1d100] => = (58) vs 50%
[08:05:27] GM: There is a lot of interference, but you think you might have something; you saw some outgoing fire at Jaron….might be a place to start
[08:05:55] Lt. Alleria kicks her sensors and fires the missiles blind at there general direction then! [1d20+3] => [15,3] = (18) packet of 2 SRM's from her MM16's
[08:06:12] GM: Blind = stright dice roll no +3
[08:06:24] GM: [1d20+5] => [8,5] = (13)
[08:06:26] Lt. Alleria: (oh fine just 15)
[08:06:34] GM: Regardless, the missile strike true!
[08:06:43] GM: missles rather
[08:06:46] GM: rolldamage
[08:07:03] Lt. Alleria: [4d6*10] => 220
[08:07:17] Lt. Alleria: (heap)
[08:07:42] GM: You do massive damage to the enemy, but the haydonite is still in the fight
[08:07:51] GM: Revanent
[08:08:16] Haydonite: That Legios hovers looks tasty, I'll shoot at that!
[08:08:24] Haydonite: [1d20+5] => [10,5] = (15)
[08:08:30] Lt. Alleria: (were not hovering btw)
[08:08:36] Haydonite: hovers = hovering
[08:08:50] Lt. Fallnya: (i stated were moving)
[08:08:53] Lt. Fallnya: (in my attack)
[08:09:12] Haydonite: hovering = moving
[08:09:30] GM: roll dodge Lu, since you are the pilot
[08:09:36] GM: no AD, since you are connected
[08:10:18] Lt. Fallnya does a simple twist spin, firing full booster to roll through the shots [1d20+15] => [2,15] = (17) + any bonus from legios form
[08:10:30] GM: 2 auto fail
[08:10:31] Lt. Fallnya: (ok now my dice hate me)
[08:10:58] Lt. Jaron tries to remember anything about forcefields as he prepares to fly pass (intelligence [1d100] => = (53) vs 56])
[08:11:16] GM: [4d4*10] => 110
[08:12:01] Lt. Fallnya: (wich main body gets hit btw sence even in legios alpha and beta's main body stats are seperate
[08:12:46] GM: Once a forcefield is brought down, it has a lengthy recycle time; this of course is the way UEEF shields work. Some localized barriers like the PPB can recycle much faster than an omni barrier, which this Behemoth seems to have, as the barrier seems to cover the entirety of the hull.
[08:13:09] GM: it all goes to the largest portion,
[08:13:42] Lt. Fallnya: (that would be the beta if im not mistaken)
[08:13:51] Lt. Jaron relays the info to the others
[08:13:56] UEEF: [4d20+5] => [11,15,3,15,5] = (49)
[08:14:00] UEEF: [1d20+5] => [7,5] = (12)
[08:14:08] GM: correct
[08:14:29] GM: Sarah
[08:18:41] CWO Sarah tries to watch after landing her Beta in Battloid, and recovering from the artillery strike. She swears very slightly from the blasts, shaking her head, and trying to watch for where the others were firing at, to see if she can get the position on sensors for the enemy ADA units, hoping to lock missiles into the sensors for line of sight.( [1d100] => = (26) RSI vs 60%)
[08:19:26] CWO Sarah: (( for lack of line of sight, rather ))
[08:19:49] GM: It's difficult, based on the terrain blocking your LOS, but with the UEEF tech such as it is, you are able to get a minor feed from Jaron's telemetry as he passes over the enemy troop formation
[08:19:58] GM: roll blind fire, straight dice roll
[08:20:13] GM: call how many missiles
[08:20:19] CWO Sarah smirks a little, targetting and launching four missiles, with hope of a hit.
[08:20:22] CWO Sarah: [1d20] => = (4)
[08:20:27] CWO Sarah: (( aw crud ))
[08:20:49] GM: Fooooosh….thud…awwww
[08:21:00] CWO Sarah: (( *starts singing 'it sucks to be me'* ))
[08:21:05] GM: Suki
[08:21:10] GM: err nope
[08:21:12] GM: Komillia
[08:21:13] 2 LT Komillia: (failure is the only option…)
[08:21:50] 2 LT Komillia peeks over the dune, and fires her gunpod and two shoulder cannons at the Revenant.
[08:22:01] GM: Roll strike
[08:22:05] 2 LT Komillia: [1d20+8] => [12,8] = (20)
[08:22:13] GM: roll damage
[08:22:46] 2 LT Komillia: [2d6*10] => 100 cannons, [1d6*10+8] => 28 gunpod
[08:22:54] 2 LT Komillia: ((128 total))
[08:23:41] GM: As you peek over the dune and take your pot shot at the Revenant you notice quite a bit. First, your shots, while striking true, are once again absorbed by that damnable energy shield. But far more dire: you see no less than 15 Reavers moving towards the ridge line in cover formation with (roll perc)
[08:24:12] 2 LT Komillia: [1d20] => = (4)
[08:24:52] GM: You don't see much else, since after seeing those 15 closing on your position you duck back down rather fast…but, as you do so, a desert squirrel bounds past your view screen
[08:25:22] GM: Suki
[08:27:17] Lt. Ishida grits her teeth at the near miss. With no targets readily availbe to shoot at, she will link the unit in using the H Alpha'so C&C feature. (RSI [1d100] => = (88) vs 60%)
[08:27:24] Lt. Ishida: ((or not))
[08:27:42] GM: Gamjin
[08:28:54] 2 LT Gamjin fires a volley of ten missiles into the revanant.
[08:29:04] GM: roll blind fire
[08:29:09] 2 LT Gamjin: [1d20] => = (1)
[08:29:23] GM: roll 1d10
[08:29:31] 2 LT Gamjin: [1d10] => = (2)
[08:29:48] GM: Two of your missiles strike Komillia as she comes back down the dune; the other 8 go wild
[08:30:20] Lt. Jaron: (dang)
[08:30:34] 2 LT Komillia: You just love giving me reasons to hate missiles, don't you?
[08:30:44] GM: roll damage
[08:30:47] CWO Sarah: (( LOL ))
[08:31:02] 2 LT Gamjin: [4d6*10] => 130
[08:31:24] GM: friendly fire has the right of way!
[08:31:27] GM: Lurana
[08:31:27] CWO Sarah: (( lmao ))
[08:32:29] Lt. Fallnya will make a pass over the enemy, making a scan and attempting to link and relay positions to the team
[08:32:37] GM: rather, incoming fire has the right of way, and friendly fire…isn't
[08:32:55] GM: roll it
[08:33:04] Lt. Fallnya: [1d100] => = (53) vs 63% normal i forget what the penalty was
[08:34:20] GM: -10%
[08:34:40] GM: so you just make it
[08:34:51] Lt. Fallnya: (k)
[08:35:09] GM: everyone gets the +2 strike and +3 dodge
[08:35:14] GM: Jaron
[08:35:59] Lt. Fallnya: (for adiditinal fly over)
[08:36:39] Lt. Jaron hits flank speed and passes over the enemy, twirling upside down once he passes.
[08:37:17] Lt. Jaron: (done; cause I cant stay in their long)
[08:37:55] Lt. Jaron: (pilot [1d100] => = (41) vs 80)
[08:38:17] GM: and now a straight % roll
[08:38:24] Lt. Jaron: [1d100] => = (21)
[08:38:34] GM: The Cyc takes 21% damage
[08:39:01] Lt. Jaron: (k)
[08:39:53] GM: Behemoth
[08:40:19] 2Lt. Aylanea: (( eep ))
[08:40:29] Haydonite Commander: Cover the advance!
[08:40:43] Haydonite: As you command!
[08:40:48] Haydonite: [1d20] => = (20)
[08:41:03] Haydonite: [1d12] => = (2)
[08:41:17] Haydonite: Sarah
[08:41:36] 2Lt. Aylanea: (( Nat 20: Rocks fall, everyone dies. ))
[08:42:01] 2Lt. Aylanea: (( actually, it's Aylanea, then Alleria's, then Sarah's. ))
[08:42:02] GM: Sarah roll some dice
[08:42:05] 2Lt. Aylanea: (( oh ))
[08:42:32] Sarah: [1d20+9] => [13,9] = (22)
[08:42:41] GM: roll a %
[08:42:49] Sarah: [1d100] => = (25)
[08:43:41] Sarah: (( distinct feeling I'm gonna need to teleport somewhere very soon ))
[08:43:59] GM: Sarah, maybe it's your training…maybe it's luck, but something tells you not to even try to dodge; hell, you don't even have time to pull the loud handle (ejection lever)…you just teleport!
[08:44:20] CWO Sarah: (( wow, guessed right. rofl. ))
[08:44:47] GM: EVERYONE else, however, sees the Beta hit dead center by the Behemoth's Synchro cannon; when the beam passes, there is no Beta left.
[08:45:16] Lt. Ishida: SARAH!!!!!!!!!!!!!!!!!!
[08:45:35] 2 LT Komillia: Damn it!!!
[08:45:55] Lt. Ishida: Ay your turn
[08:46:36] Lt. Fallnya: "Damit!" *she turns to her Rio's direction "Alleria i want you to unload every thing you have into that Behemoth!"
[08:47:22] 2Lt. Madresa was just about to creep up the dune to fire at one of the ADA units, when she sees the Behemoth teleport. She swears softly at that, thinking very quickly and hoping it has to drop the shield a moment to fire, firing off twelve SRMs towards the Behemoth immediately, instead. [1d20+3] => [14,3] = (17)
[08:47:34] Lt. Jaron: (isnt there a behemoth and revenant or just the Behemoth?)
[08:47:35] 2Lt. Madresa: (( er, fire ))
[08:47:36] GM: teleport?
[08:47:44] GM: one each
[08:48:35] 2Lt. Madresa was just about to creep up the dune to fire at one of the ADA units, when she sees the Behemoth vape the Beta. She swears softly at that, thinking very quickly and hoping it has to drop the shield a moment to fire, firing off twelve SRMs towards the Behemoth immediately, instead. (for log)
[08:48:45] 2Lt. Madresa: (( there, better ))
[08:48:49] 2Lt. Madresa: (( just had a brain faret ))
[08:48:52] 2Lt. Madresa: (( fart ))
[08:48:59] GM: a brain ferret?
[08:49:09] GM: better then a brain squirrel
[08:49:15] GM: roll strike
[08:49:23] GM: it's not a tumor
[08:49:25] 2Lt. Madresa: (( I did on the initial botched thing ))
[08:49:32] GM: ah ha
[08:49:38] GM: roll damamge
[08:49:41] 2Lt. Madresa: (( too busy noticing that I put the wrong word in? ))
[08:49:50] GM: yes
[08:50:20] 2Lt. Madresa: [(2d6*10)*12] => 840
[08:52:22] GM: The revenge attack doesn't exactly pan out as planned; the Haydonite behemoth did not need to drop shields to fire. However, the impacting missiles overwhelm the shields, and Ay can see slight, very slight, damage to the hull of the killer machine.
[08:52:39] 2Lt. Madresa: (( woot! ))
[08:53:17] GM: Haydonite ADA unit
[08:54:06] Haydonite: 1-2 Fire on that damaged Alpha flying over… or 3-4 fire on that Legios that is flying over.
[08:54:08] Haydonite: [1d4] => = (2)
[08:54:13] Haydonite: [1d20+5] => [7,5] = (12)
[08:54:19] GM: jeroin
[08:54:31] GM: Jeroin! lol, like heroin…anyway, Jaron, roll dodge lol
[08:54:49] Lt. Jaron: (dodge [1d20+15] => [6,15] = (21))
[08:55:18] GM: Jaron, seems this enemy isn't going to leave you alone! But you are able to squeak out of harm's way, yet again!!!
[08:55:23] GM: Alleria
[08:56:29] Lt. Fallnya attempts to lock on and fire her full 50-missile payload at the Behemoth
[08:56:41] Lt. Fallnya: [1d100] => = (22) vs 50% RSI to lock
[08:56:54] Lt. Fallnya: [1d20+5] => [1,5] = (6) to fire full bonus
[08:57:00] Lt. Fallnya: (fuck)
[08:57:11] 2Lt. Madresa: (( rotflolmao ))
[08:57:13] GM: Looks like someone ELSE will be running lines with Suki
[08:57:41] GM: roll a 20
[08:57:46] Lt. Fallnya: [1d20] => = (1)
[08:57:58] Lt. Fallnya: (>.> err alleria isnt in a good day)
[08:58:02] 2Lt. Madresa: (( UBERFAIL x2! ))
[08:58:19] GM: You mishandle your weapon, sending your aim awry. Attempting to recover control of your weapon you forfeit any remaining actions
[08:58:29] 2Lt. Madresa: (( it's Alleria again, but acting under Lu's orders. ))
[08:58:39] GM: Your missiles go willy-nilly like water from an out-of-control fire hose!
[08:58:53] GM: Even the missiles dont know where they are going!
[08:59:09] Missile 1: Where the fuck are we going again?
[08:59:17] Missile 2: Hell if I know!
[08:59:22] 2Lt. Madresa: (( rotfl ))
[09:00:01] GM: Ahh true
[09:00:07] Lt. Alleria: "Shit shit shit, why does my missiles usually hate me!" *starts running programs to begin disconnection!
[09:00:11] GM: Running lines…running lines
[09:00:17] CWO Sarah: (( *giggles* ))
[09:00:37] GM: The remaining ADA unit fires at the missile spammer
[09:00:39] Lt. Jaron: (hehe)
[09:00:40] GM: [1d20+5] => [7,5] = (12)
[09:01:39] GM: thats you Lu
[09:01:56] Lt. Fallnya kicks on full AB to outrun the fire. [1d20+15] => [2,15] = (17)
[09:02:08] Lt. Fallnya: (mother fucker… im about to say i quit )
[09:02:08] GM: [4d4*10] => 110
[09:03:26] CWO Sarah: (( at least you're in one piece, and didn't just have a plane vaporized ))
[09:03:56] GM: oh by one point to
[09:04:05] GM: roll a % Lu
[09:04:45] Lt. Fallnya: (why shes not bad enough yet ish she?)
[09:04:48] Lt. Fallnya: [1d100] => = (66)
[09:04:52] GM: yes, by one point
[09:05:11] GM: Sever internal damage -1 attack, -2 init, -2 dodge
[09:05:19] GM: severe
[09:06:05] UEEF: [1d6] => = (5)
[09:06:22] UEEF: [1d6] => = (2)
[09:06:27] UEEF: [1d6] => = (6)
[09:06:32] UEEF: [1d6] => = (1)
[09:06:33] Lt. Fallnya: (that only affects her right?)
[09:06:36] UEEF: [1d6] => = (3)
[09:06:42] UEEF: [1d6] => = (4)
[09:06:48] UEEF: [1d6] => = (3)
[09:06:48] UEEF: [1d6] => = (2)
[09:07:10] GM: Just the Beta technically
[09:07:25] GM: since it's been the whipping boy in all of this
[09:07:42] GM: The UEEF missile strike impacts the haydonite camp!!!
[09:08:02] GM: [4d6*10] => 150
[09:08:02] GM: [4d6*10] => 190
[09:08:03] GM: [4d6*10] => 120
[09:08:03] GM: [4d6*10] => 180
[09:08:31] GM: [5d6*10] => 170
[09:08:31] GM: [5d6*10] => 190
[09:08:31] GM: [5d6*10] => 190
[09:08:31] GM: [5d6*10] => 210
[09:09:05] CWO Sarah: (( 1400 ))
[09:09:18] GM: Lurana, as you fly out at max burner you see a descending missile fly within inches of your cockpit and then explode below in a plasma air burst
[09:09:38] GM: not all total sorry, the damage is spread out some
[09:09:45] CWO Sarah: (( oh. ))
[09:10:02] GM: Revenant
[09:10:14] Haydonite: Orders!?
[09:10:24] Haydonite Commander: HOLD THE LINE!!!!
[09:10:35] Haydonite: By your command!
[09:10:58] Haydonite: I will fire at Lu 1-2 Jaron 3-4
[09:11:05] Haydonite: [1d4] => = (2)
[09:11:26] Haydonite: ((Blind fire due to all the hate that just rained down on us.))
[09:11:29] Haydonite: [1d20] => = (11)
[09:11:52] GM: Lurana! Thos assholes still wont let you go!
[09:12:04] Lt. Fallnya continues full AB out of the area of fire, preparing for aerial separation. "After separation take cover with the others!" [1d20+15] => [1,15] = (16)
[09:12:09] Lt. Fallnya: (fuck)
[09:12:12] Lt. Fallnya: (i quit)
[09:12:16] GM: [4d4*10] => 90
[09:12:38] Lt. Fallnya: (thats 4 low rolls in a row )
[09:12:50] Lt. Fallnya: (Beta is at -400)
[09:12:56] Lt. Fallnya: (out of 550)
[09:12:57] GM: Beta is still in one piece tho!
[09:13:03] GM: roll a %
[09:13:45] CWO Sarah: (( everyone is having interesting rolls tonight, Lu. Otherwise Sarah might still exist right now. ))
[09:14:05] GM: Lay a % roll on me Lu
[09:14:14] Lt. Fallnya: [1d100] => = (7)
[09:14:51] 2 LT Komillia: ((Komi is and Sarah the only one who hasn't had a bad roll tonight I think))
[09:15:02] GM: Alleria…not Lu, the Beta starts to get double vision on its radar -8 to strike for beta weapon systems only
[09:15:33] GM: Sarah, um, we'll get back to you
[09:15:38] GM: Komillia
[09:16:07] 2 LT Komillia pops over the dune and fires another set of rounds from her cannons.
[09:16:14] Lt. Alleria will begin seperation sequence if were behind the first dune at least
[09:16:28] GM: roll it Komillia
[09:16:36] 2 LT Komillia: [1d20+9] => [8,9] = (17)
[09:16:39] GM: and then give me a series of 4 dodges
[09:16:42] GM: AD style
[09:16:53] GM: [1d20+5] => [18,5] = (23)
[09:16:53] GM: [1d20+5] => [15,5] = (20)
[09:16:53] GM: [1d20+5] => [1,5] = (6)
[09:16:54] GM: [1d20+5] => [14,5] = (19)
[09:16:54] GM: [1d20+5] => [12,5] = (17)
[09:17:02] GM: disregard last roll on that
[09:17:20] 2 LT Komillia: [1d20+10] => [8,10] = (18)
[09:17:23] 2 LT Komillia: [1d20+10] => [9,10] = (19)
[09:17:24] 2 LT Komillia: [1d20+10] => [16,10] = (26)
[09:17:27] 2 LT Komillia: [1d20+10] => [13,10] = (23)
[09:17:31] GM: kk
[09:17:46] GM: rolldamage as well
[09:17:55] GM: at the behemoth right?
[09:18:18] 2 LT Komillia: [2d6*10] => 110 cannons, [1d6*10+8] => 38 gunpod
[09:18:31] 2 LT Komillia: [2d6*10] => 50 cannons
[09:18:47] 2 LT Komillia: [1d8*10+8] => 68 gunpod
[09:19:02] 2 LT Komillia: ((the first wasn't a roll, it was a mis hit of an enter))
[09:19:13] GM: kk
[09:19:16] GM: and your target?
[09:19:33] 2 LT Komillia: ((revanant))
[09:19:36] GM: kk
[09:21:11] GM: The rounds impact and the shield holds, but if watching your Super Robot Wars shows has taught you anything, the way the shield took that last hit means it is reaching its limit!!!
[09:21:38] GM: Suki
[09:21:44] 2 LT Komillia: You're going to have to start dodging soon, sucker.
[09:21:53] GM: Roll a morale check
[09:22:21] GM: [1d20] => = (6)
[09:22:34] GM: Suki is frozen in shock
[09:22:38] GM: Gamjin
[09:23:10] 2 LT Gamjin fires 16 missiles at the behemouth.
[09:23:17] GM: roll it!
[09:23:24] 2 LT Gamjin: [1d20+3] => [6,3] = (9)
[09:23:35] GM: oh, and Komillia
[09:23:43] GM: [1d4*10+15] => 45
[09:23:43] GM: [1d4*10+15] => 55
[09:24:08] GM: you take 100 points of damage from 2 of at least 4 Reavers cresting the ridge, advancing on your position
[09:24:36] Haydonite: [1d20+2] => [11,2] = (13)
[09:24:40] Haydonite: [1d100] => = (52)
[09:25:45] GM: The Behemoth isn't out of tricks just yet! And as the missiles arc over, point defense guns swing into action, but are only able to fell half the volley! Roll damage for 8 missiles
[09:27:23] 2 LT Komillia: [1d100+3] => [7,3] = (10)
[09:27:32] 2 LT Komillia: ((sorry, mis click))
[09:27:49] GM: was gonna say, if thats how you wanna roll it, you got screwed lol
[09:27:50] 2 LT Gamjin: (16d6*10)
[09:27:57] 2 LT Gamjin: [16d6*10] => 630
[09:28:29] GM: The behemoth takes the hits and armor plate flakes off, but by all outward appearances the machine looks combat ready!
[09:28:41] GM: Lurana
[09:29:40] GM does a rain dance to try and get Lurana's dice to roll above a 10
[09:30:05] Lt. Fallnya manually disconnects from Alleria, who begins a dive towards ground level to take cover, while Lu swings around to lock her own missile payload on the Behemoth
[09:30:32] Lt. Fallnya: [1d100] => = (9)
[09:30:34] GM: nod nod
[09:30:40] Lt. Fallnya: vs 61 to lock
[09:30:42] GM: there you go, lower the better here
[09:30:57] Lt. Fallnya: [1d20
[09:31:01] Lt. Fallnya: [1d20+3] => [8,3] = (11)
[09:31:25] Lt. Fallnya: (Btw shes emptying her 60 missiles on the bohemith
[09:31:39] GM: you want to run lines TOO!!!! You're a MADMAN!!!!
[09:31:58] Lt. Fallnya: (no just call her crazy and wanting revenge)
[09:31:58] Haydonite: Eep!
[09:32:04] Haydonite: [1d20] => = (3)
[09:32:09] Haydonite: Oh Ffffffffffffff
[09:32:29] GM: Roll damage
[09:32:34] Lt. Fallnya: [120d6*10] => 120d6*10
[09:32:38] Lt. Fallnya: aww
[09:32:42] GM: lol
[09:32:45] GM: easy does it
[09:33:05] Lt. Fallnya: [2d6*10] => 100 multiply this by 60 missiles
[09:33:16] Lt. Fallnya: 6000 damage
[09:33:26] GM: o.O
[09:33:36] GM: And the quarter back is TOAST!
[09:34:36] Lt. Fallnya: (that should be the end of the bohemith and possibly any thing within a 60 ft radius that is if it didnt have a nasty reactor that went to)
[09:34:48] GM: Each and every single missile strikes true! The barrage lasts but a few seconds but seems to take forever! In the end the Behemoth is laid waste, split open by the vengeful assault!
[09:35:18] Lt. Fallnya: "EAt that shi and DIE you bastard!"
[09:35:24] Lt. Fallnya: shit*
[09:35:24] GM: Haydonites, roll a morale check!
[09:35:32] Haydonite: [1d20] => = (3)
[09:35:34] Haydonite: [1d20] => = (13)
[09:35:35] Haydonite: [1d20] => = (14)
[09:35:36] Haydonite: [1d20] => = (16)
[09:35:37] Haydonite: [1d20] => = (8)
[09:36:34] GM: Those Haydonites near the behemoth (that survived) begin to move back! Those ahead of it continue on, pressing forward, guns ablaze!
[09:36:39] GM: Jaron
[09:37:29] Lt. Jaron feels, then sees, the destroyed behemoth and transforms to Guardian and picks up his Cyclone since his plan isn't needed now (done)
[09:37:43] GM: Ay
[09:37:58] GM: You can see Haydonite reavers on the dune crest 8 of em!
[09:38:11] 2Lt. Aylanea: (( is the Revenant still there? ))
[09:38:30] GM: You cannot see it
[09:38:35] 2Lt. Aylanea: (( okie ))
[09:38:41] GM: it is behind the dune, if it is still alive
[09:39:15] 2Lt. Aylanea cheers into the radio as she sees the explosion of the Behemoth. "Good shot, ma'am!" She locks in four of the reavers, and lets them taste four missiles each.
[09:39:15] GM: You do however see a massive cloud of dust and debris and all of you are being peppered by small chunks of Behemoth right now
[09:39:26] 2Lt. Aylanea: [1d20+3] => [7,3] = (10)
[09:39:32] GM: kk, you are locking on
[09:39:39] GM: Alleria
[09:40:12] Lt. Alleria dives for cover behind the 2nd dune
[09:40:24] Lt. Alleria: (thats all i can do i think im out of attacks
[09:42:24] Narrator: As Alleria lands her damaged Beta behind the forward dune, the Haydonites' gambit springs into action: cresting the dune, 15 Reavers begin their push, while on the right and left flanks another 10 Reavers each begin to push on the clustered UEEF forces! The marines in the rear split their focus and start to lay down covering fire!
[09:43:32] Narrator: Komillia
[09:44:49] 2 LT Komillia fires her gun and cannons at the cresting creepy-crawlies.
[09:45:02] GM: roll it!
[09:45:02] 2 LT Komillia: ((the nearest one anyway.))
[09:45:09] GM: kk
[09:45:13] 2 LT Komillia: [1d20+9] => [17,9] = (26)
[09:45:20] GM: [1d20+4] => [2,4] = (6)
[09:45:24] GM: roll damage
[09:45:27] 2 LT Komillia: [2d6*10] => 80 cannons
[09:45:34] 2 LT Komillia: [1d8*10+8] => 18 gunpod
[09:46:23] GM: 98
[09:47:02] GM: The Reaver staggers and goes to a knee, but is still combat effective! Ay, Gamjin and Komillia, you are each fired at by 5 Reavers
[09:47:52] Haydonite: ((rolling in packs of 5))
[09:48:05] Haydonite: ((ay, Gam and at Komillia in that order))
[09:48:08] Haydonite: [1d20+4] => [17,4] = (21)
[09:48:08] Haydonite: [1d20+4] => [20,4] = (24)
[09:48:09] Haydonite: [1d20+4] => [2,4] = (6)
[09:48:56] GM: Roll your dodges and or declare your actions
[09:49:09] 2Lt. Aylanea eeks and tries to move out of the way.
[09:49:11] 2Lt. Aylanea: [1d20+14] => [5,14] = (19)
[09:49:19] 2 LT Komillia: [1d20+10] => [3,10] = (13)
[09:49:34] 2 LT Komillia experiences dodge fail.
[09:49:38] GM: [1d4*10+15] => 25
[09:49:39] GM: [1d4*10+15] => 35
[09:49:39] GM: [1d4*10+15] => 35
[09:49:40] GM: [1d4*10+15] => 35
[09:49:40] GM: [1d4*10+15] => 35
[09:49:44] Lt. Alleria: (komi didnt have to dodge technically)
[09:49:46] GM: At Ay
[09:49:56] GM: [1d4*10+15] => 45
[09:49:56] GM: [1d4*10+15] => 35
[09:49:57] GM: [1d4*10+15] => 25
[09:49:57] GM: [1d4*10+15] => 25
[09:49:58] GM: [1d4*10+15] => 45
[09:50:01] GM: at Komi
[09:50:09] GM: [1d4*10+15*2] => 70
[09:50:10] GM: [1d4*10+15*2] => 70
[09:50:10] GM: [1d4*10+15*2] => 40
[09:50:11] GM: [1d4*10+15*2] => 50
[09:50:11] GM: [1d4*10+15*2] => 50
[09:50:14] GM: at gamjin
[09:50:43] 2 LT Komillia: ((actually she's right, the haydonity rolled a two, I didn't see that.))
[09:51:21] 2 LT Gamjin dodges!
[09:51:23] 2 LT Gamjin: [1d20+9] => [7,9] = (16)
[09:51:32] 2Lt. Aylanea: (( *subtracts her damage* ))
[09:52:06] 2 LT Gamjin acknowledges the smoking holes in his Condor.
[09:52:42] GM: Suki
[09:55:07] Lt. Ishida blinks back the tears and swallows a lump the size of a factory satellite. You will mourn her later; the others need you now! Her bushi upbringing takes hold and she surveys the field. Seeing the encroaching Reavers on the ridge, she fires a barrage of 10 missiles at the crest. (Tactics [1d100] => = (57) vs 60) (strike [1d20+3] => [2,3] = (5))
[09:55:34] 2Lt. Aylanea: (( still in grief, missed. ))
[09:55:38] 2Lt. Aylanea: (( ))
[09:55:47] GM: Suki's missiles fail to hit their targets but the fireworks display is impressive enough….
[09:56:25] GM: However, Suki, you are able to get enough of a picture to realize that you and your unit are about to be enveloped in a classic pincer movement very soon!
[09:56:56] Lt. Ishida: Hitman team…
[09:57:03] Lt. Ishida her voice cracks.
[09:57:19] Lt. Ishida: Hitman team! We need to pull back, reform the line, we are about to be enveloped!!!
[09:57:39] GM: Gamjin
[09:59:08] 2 LT Gamjin realizes he only has one missile left and fires on the revanant with his gunpod.
[09:59:12] Lt. Fallnya: "All unitrs fall back ill cover go!"
[09:59:29] GM: roll it
[09:59:32] 2 LT Gamjin: [1d20+7] => [15,7] = (22)
[09:59:46] GM: roll damage
[10:00:00] 2 LT Gamjin: [2d4*10+8] => 68
[10:00:33] GM: The shield drops!!! And the Revenant takes paint damage
[10:00:56] GM: Luranan
[10:00:59] GM: Lurana
[10:02:32] Lt. Fallnya changes to Battloid and begins firing thrusters to lower herself onto the heads of a few Reavers, firing her twin gunpods at weaker-appearing ones [1d20+5] => [2,5] = (7) vs 1 target [1d20+5] => [8,5] = (13) vs 2nd target
[10:02:50] GM: [1d20+4] => [3,4] = (7)
[10:03:02] GM: You are able to hit one target you miss the first
[10:03:41] GM: roll damage
[10:04:14] Lt. Fallnya: [(3d4*10)+30] => 90
[10:04:16] GM: Jaron, roll a nav roll
[10:04:56] GM: Lurana, the target you hit is tossed onto its back, but you know it's still alive!
[10:05:34] Lt. Jaron: [1d100] => = (42) vs 64
[10:05:50] GM: You are able to locate your Cyclone and scoop it up; your action
[10:06:58] Lt. Jaron skims slightly above the terrain as he heads towards the rear of the once Haydonite nest, firing a burst from his gunpod at the hind quarters of the Revenant (If possible; to strike [1d20+8] => [7,8] = (15) dmg [(2d4*10)+30] => 60)
[10:07:43] GM: You hit the revenant and its leg only slightly buckles, but being four-legged, its balance isn't greatly affected.
[10:07:49] GM: Ay
[10:08:57] 2Lt. Aylanea hears the pull back order as she was locking, sighing and moving to pull back as ordered.
[10:08:59] Lt. Fallnya: "I repeat all units fall back and regroup"
[10:09:10] GM: Alleria
[10:09:54] UEEF: Battery command! You guys need a third shot or what?
[10:11:09] Lt. Fallnya: "We could if you can hit just behind the ridge at coordinates [1d100] => = (93) vs 61% land nav
[10:11:20] UEEF: Roger that!
[10:11:21] Lt. Alleria: (i assume im still out of attacks?
[10:11:27] GM: ewwww
[10:11:39] GM: Depends on what you want to do
[10:11:46] Lt. Alleria changes battloid and falls back
[10:11:59] GM: Haydonite ADA your turn
[10:12:07] Lt. Alleria playing leap frog over the dunes trying to get to the 2nd defense line
[10:12:18] Haydonite: You know, I' am SICK of that damn Jaron flyin over us all damn night! I shoot at him!
[10:12:28] Haydonite: [1d20+5] => [5,5] = (10)
[10:12:38] GM: And revenant
[10:12:57] Haydonite: You know what, yeah, me too! Damn UEEF hotshot!
[10:12:59] Haydonite: [1d20+5] => [12,5] = (17)
[10:13:38] Lt. Jaron tries to dodge ([1d20+15] => [9,15] = (24))
[10:13:38] CWO Sarah: [1d6] => = (5)
[10:13:40] GM: Jaron…they pissed at you!
[10:13:55] GM: and a second dodge
[10:14:04] Lt. Jaron tries to dodge again ([1d20+15] => [12,15] = (27))
[10:14:19] Haydonite: FUCKING hax!
[10:14:29] GM: hey, he beat you fair and square
[10:14:40] Lt. Jaron: (haha)
[10:14:41] Haydonite: Still hax!
[10:14:47] That's the Tab key, Dave
[10:14:52] CWO Sarah: (( rofl ))
[10:15:12] GM: Sarah, you recombobulate in the spot you last played cards, glowing, and nekkid
[10:15:23] CWO Sarah: (( lol ))
[10:15:29] GM: oh, and it's your turn to, fancy that…
[10:16:22] CWO Sarah looks around as she pops back up there, sighing softly and trying to put her uniform back on from the clothes she knows how to adjust herself into. Then looks around to see if anyone is around at the camp, even before she stops glowing.
[10:17:00] GM: About 15 people notice your rather 'flashy' (HAA, I'm so punny) appearance
[10:17:27] CWO Sarah: (( *smacks for that joke* ))
[10:17:36] GM: yeah, I deserved that
[10:17:46] CWO Sarah: (( corny cornycorny ))
[10:17:58] GM: what, and waste A list meterial here?
[10:18:04] GM: anyhoo
[10:18:58] GM: A few marines move towards you in CVR-3, etc.; by the time they approach, you have donned your uniform and your glow has faded.
[10:19:01] GM: Komillia
[10:19:31] 2 LT Gamjin engages the booster and starts pulling out.
[10:19:37] GM: Suki
[10:20:19] Lt. Ishida will fire another 10 missiles to cover the retreat. [1d20+3] => [14,3] = (17)
[10:20:38] Haydonite: [1d20+4] => [5,4] = (9)
[10:20:38] Haydonite: [1d20+4] => [18,4] = (22)
[10:20:39] Haydonite: [1d20+4] => [6,4] = (10)
[10:20:40] Haydonite: [1d20+4] => [18,4] = (22)
[10:20:43] Haydonite: [1d20+4] => [19,4] = (23)
[10:20:44] Haydonite: [1d20+4] => [17,4] = (21)
[10:21:08] Haydonite gives Suki a raspberry. NYA! You suck
[10:21:54] GM: [2d6*10] => 60
[10:21:54] GM: [2d6*10] => 70
[10:21:57] GM: [2d6*10] => 90
[10:21:57] GM: [2d6*10] => 70
[10:22:37] GM: The missiles that actually hit do moderate damage, but otherwise the encroaching tide of Haydonite forces is only blinded visually by the thrown-up smoke and dust
[10:22:43] GM: Gamjin
[10:23:19] 2 LT Gamjin follows after Komi with as much speed as the Condor can muster.
[10:23:48] GM: Lurana
[10:24:38] Lt. Alleria, upon landing in the Haydonites' midst, goes full auto, belting at 2 different Haydonites' heads at close range [1d20+2] => [6,2] = (8), taking an additional -3 for head shots [1d20+2] => [11,2] = (13)
[10:25:06] Haydonite: [1d20+5] => [14,5] = (19)
[10:25:07] Haydonite: [1d20+5] => [14,5] = (19)
[10:25:23] Haydonite: Nya! You couldn't hit a Factory satellite if you were INSIDE it!
[10:25:37] Lt. Alleria: (maybe I'll just kick one
[10:25:38] GM: Jaron
[10:25:43] Lt. Jaron swivels, strafing left towards the battle lines; slightly facing the Revenant, he then fires clusters of missiles, one at the Revenant and the other at the middle Reavers on the ridge, each getting 8 missiles (to strike [1d20+5] => [19,5] = (24))
[10:25:55] GM: dayumm
[10:26:03] GM: there we go on the strike rolls
[10:26:23] Haydonite: [1d20+3] => [10,3] = (13)
[10:26:31] Haydonite: [1d100] => = (39)
[10:26:48] GM: Roll damage on the missiles on the reavers
[10:27:03] GM: The Revenant is able to detonate the volley of incoming with point defence fire
[10:27:12] Lt. Jaron: Reavers [(2d6*10)*8] => 640
[10:27:21] GM: roll a d10
[10:27:27] Lt. Jaron: [1d10] => = (3)
[10:27:53] GM: 6 reavers die, mainly those hit previously, plus 3 freshies
[10:28:04] GM: Ay
[10:28:17] Lt. Jaron radios "Clearing some of them up for you guys"
[10:28:48] 2Lt. Aylanea straightens up from the fallback point and tries to sight in four reavers from where she is, hoping to deliver four missiles each to all of them.
[10:29:00] GM: Alleria
[10:30:05] GM: good point, roll strike Ay
[10:30:09] 2Lt. Aylanea: [1d20+3] => [11,3] = (14)
[10:30:24] Haydonite: [1d20+3] => [8,3] = (11)
[10:30:24] Haydonite: [1d20+3] => [4,3] = (7)
[10:30:25] Haydonite: [1d20+3] => [3,3] = (6)
[10:30:25] Haydonite: [1d20+3] => [5,3] = (8)
[10:30:29] Lt. Alleria just continues to retreat (I assume I still have no attacks; if I can, I'll burst fire covering my own retreat)
[10:30:32] GM: roll damage
[10:30:58] 2Lt. Aylanea: [(2d6*10)*4] => 200
[10:31:03] 2Lt. Aylanea: [(2d6*10)*4] => 240
[10:31:04] 2Lt. Aylanea: [(2d6*10)*4] => 280
[10:31:05] 2Lt. Aylanea: [(2d6*10)*4] => 160
[10:31:17] Captcha (enter): 22:31
[10:31:34] whispering to Captcha, hiya :)
[10:32:06] GM: Your missile volley wipes out three of the four you fire on; the fourth survives, but is thrown from the ridge and tumbles down the backside of it
[10:32:32] GM: Sarah
[10:36:05] CWO Amdahl looks up at the few marines moving towards her, sitting up and sighing, rubbing her head just a bit and glancing around to see if she recognizes anyone from the game or anywhere. She tries to put on her sweetest smile and the most innocent look she can really muster. ([1d100] => = (39) trust vs. 94% with probably heavy penalty considering (if allowed at all)). "Well, that was unfun." She looks over to one. "Remind me not to end up in the line of fire of a Synchro Cannon again. Thanks."
[10:36:50] GM: One of the Marines recognizes you as the orange haired pilot.
[10:37:11] UEEF: Um…Ma'am…what are you doing back here?
[10:37:56] GM: Komillia
[10:39:03] 2 LT Komillia fires back at the nearest haydonite while retreating.
[10:39:13] 2 LT Komillia: [1d12+9] => [12,9] = (21)
[10:39:13] UEEF: Hitman flight! Razor Actual here! You guys look like you coud use some help!
[10:39:28] Haydonite: [1d20+4] => [6,4] = (10)
[10:39:36] GM: Roll damage
[10:40:16] Lt. Fallnya: "Cover our ground forces' retreat, I've taken out their big guns"
[10:40:20] 2 LT Komillia: [1d8*10+8] => 38 gunpod
[10:40:35] UEEF: Roger that! Pull on back, we got this!
[10:41:29] GM: Your gunpod strike tears a slight chunk of meat from the Haydonite's armor, but the damn thing still moves forward, firing as he does so.
[10:41:37] GM: Suki!
[10:42:52] Lt. Ishida: Razor Actual! Feeding you combat data! Enemy on the ridge ahead and to the south and north! We have friendly dismounts to our center! [1d100] => = (47) vs 60%
[10:43:28] UEEF: I see it, we have your locs fixed! Move on out, you are in a bad way!
[10:43:47] GM: Gamjin
[10:44:33] 2 LT Gamjin fires a blast from his EU-12 at the same Haydonite Komillia fired on.
[10:44:38] 2 LT Gamjin: [1d20+7] => [2,7] = (9)
[10:44:39] GM: roll it
[10:44:47] GM: Big WIFFAH!
[10:45:05] GM: Lurana
[10:46:03] Lt. Fallnya fires her thrusters, hopping away, still trying to pop 2 Haydonites at once, firing at one she had damaged and the one Komi had
[10:46:15] Lt. Fallnya: [1d20+5] => [8,5] = (13) [1d20+5] => [15,5] = (20)
[10:46:28] Haydonite: [1d20+4] => [16,4] = (20)
[10:46:29] Haydonite: [1d20+4] => [13,4] = (17)
[10:46:45] GM: You miss #1 but hit #2
[10:46:47] GM: roll damage
[10:47:14] Lt. Fallnya: [(3d4*10)+30] => 80
[10:47:44] GM: You pitch the bastard back hard over and lose sight of him as the dust from your rounds kick up a cloud
[10:47:50] GM: Jaron
[10:49:05] Lt. Jaron skims backwards and fires another cluster at the ridge Reaves and Revenant again, another 8 (to strike [1d20+5] => [12,5] = (17))
[10:49:21] GM: pick one
[10:49:24] GM: reaver or revanant
[10:49:32] Lt. Jaron: (revenant)
[10:49:42] Haydonite: [1d20+4] => [3,4] = (7)
[10:49:50] Haydonite: Oh for…..you suck!
[10:49:58] GM: roll damage
[10:49:58] Lt. Jaron: [(2d6*10)*8] => 240
[10:50:40] GM: The revenant takes the missile to the dome! But the Haydonite construction shows its mettle this day and the revenant is still alive and kicking!
[10:50:44] GM: Ay
[10:51:34] 2Lt. Aylanea: (( hrm. how many ridge reavers left? ))
[10:51:47] GM: 5 center, 10 each N and S
[10:52:35] 2Lt. Aylanea locks up four more reavers, this time taking aim for the south flank, launching a quartet of missiles at each once again, to thin the competition out. [1d20+3] => [2,3] = (5)
[10:52:38] 2Lt. Aylanea: (( crap ))
[10:53:29] GM: Your missiles fly off, like so many others to join a commune
[10:53:45] CWO Sarah: (( lol ))
[10:53:53] UEEF: Light em up boys!
[10:54:01] UEEF: [1d20+3] => [8,3] = (11)
[10:54:01] UEEF: [1d20+3] => [17,3] = (20)
[10:54:02] UEEF: [1d20+3] => [18,3] = (21)
[10:54:02] UEEF: [1d20+3] => [8,3] = (11)
[10:54:02] UEEF: [1d20+3] => [14,3] = (17)
[10:54:03] UEEF: [1d20+3] => [3,3] = (6)
[10:56:06] Narrator: As the Hitman team falls back with their Marine LRRP in tow, Razor Flight overflies them hugging the deck and looses a barrage of missiles each at the encroaching enemy! The tally of dead is lost in the conflagration, but for now the Marines of Hitman flight have their rear well covered and are clear of combat…for now.
[10:56:39] Lt. Mitchell: Hitman flight say status over!
[10:57:19] Lt. Fallnya: "One mecha down, low missiles across the board, falling back to set up 2nd defensive line"
[10:57:43] Lt. Jaron takes off south and soon circles around to regroup
[10:58:19] Lt. Mitchell: Roger that! Change of plans! You are to fall back to Point R! Damocles just called in, they are moving off station, something big is going down on orbit! Command needs all available units at R now!
[10:58:34] CWO Sarah sighs very softly as she glances over to the Marine, shaking her head slightly. "I died, just didn't quite make it to anywhere, so got dumped back here. Hitman Actual will confirm that, if they're still okay. I'm hoping that after the Beta got vaporized, they might have made a smart move." She looks around for a mech or other radio. "Need to contact them though…"
[10:59:03] UEEF: You…died? ma'am you look fine to me…
[10:59:22] Lt. Jaron soon follows the others to Point R
[11:00:15] Lt. Fallnya: "Roger, Hitman flight, pick up the marines and proceed to Point R in guardian formation. Alleria, pick up what marines you can in your cargo bay and follow in flight mode"
[11:00:20] Damocles (UES Yukikaze): skkkzzzzx…flight. Hitman flight…skkkkxxxx…in! Respond!
[11:01:02] Lt. Fallnya: "Damocles, you're breaking up, attempting to boost signal
[11:01:18] CWO Sarah shrugs a little bit and shakes her head. "Long story. Though not really… A radio…" she sighs a little bit.
[11:01:18] Lt. Mitchell: Summerwind! Get on it!
[11:01:21] Lt. Fallnya: [1d100] => = (15) vs 78%
[11:01:35] LCpl Summerwind: yeah yeah quit your bitching…sir.
[11:01:39] LCpl Summerwind: [1d100] => = (13)
[11:01:54] Damocles (UES Yukikaze): Hitman flight, come in over.
[11:01:55] Lt. Fallnya: (oh i thought that was directed to us my bad)
[11:02:10] Lt. Fallnya: "Hitman Actual here"
[11:03:04] Lt. Alleria changes modes and lands to pick up any marines that want a ride in her cargo bay
[11:04:16] Damocles (UES Yukikaze): We cannot support you from here on, Haydonite fleet inbound, you are to protect Point R, follow command's orders. Be advised, we monitored your engagement in the desert, the captain and Major send their congratulations, and their condolences. Enemy forces have been diverted, they are no longer, repeat, no longer approaching from the east, we count three, count three, Fantoma cruisers NOW approaching over the badlands! How copy, over
[11:05:00] Lt. Alleria: "Roger, we copy that, we're gonna need bigger guns if you want us to take out cruisers"
[11:05:16] 2Lt. Aylanea just follows along listening, with her damaged Alpha.
[11:05:28] Lt. Alleria: (err woops)
[11:05:29] Lt. Fallnya: "Roger, we copy that, we're gonna need bigger guns if you want us to take out cruisers"
[11:05:29] Damocles (UES Yukikaze): Negative, just follow commands orders!
[11:05:45] Damocles (UES Yukikaze): Good luck and gods speed!
[11:06:54] Narrator: In the background you can hear the captain's stern voice: "Shields up! Gunnery control, I want your stations online! Set condition 1 SQ throughout the ship!!"
[11:06:58] Lt. Fallnya switches radio to command at Point R "Command, Hitman Actual, we're picking up our marine ground teams and proceeding in; repairs needed, and reload as fast as possible"
[11:07:58] Lt. Fallnya switches to guardian and gently picks up a marine, any that don't happen to get aboard the Beta, for a fly back
[11:08:26] UEEF: Roger that, Hitman flight, we have you on final, follow the beacon and we will resupply. Be advised, any combat-capable craft need to take up a CAP posture, we have massive incoming!
[11:08:48] CWO Sarah gets aboard the Beta basically, since she isn't able to do a whole lot else at this point.
[11:09:03] Lt. Fallnya: "Im combat effective except bingo missiles"
[11:09:31] Lt. Jaron is headed in to land
[11:09:43] 2 LT Komillia: If we get to set down somewhere, I can transfer my nearly full rack to someone who'll use them.
[11:10:01] Lt. Fallnya maneuvers to land, setting down the marine, changing to alpha, and landing with VTOL
[11:10:11] Narrator: As the craft of Hitman flight are landed, their damage becomes apparent, and the ground crews perform 'triage'. Jaron, Ay, Gamjin, Alleria, and Komillia are issued new mecha
[11:10:37] Narrator: Suki and Lurana are reloaded
[11:10:37] CWO Sarah: (( are the marines offloaded? ))
[11:10:47] Narrator: ((yes))
[11:10:58] Lt. Alleria cries, as she's gonna have to learn the quirks of a new craft
[11:11:37] Narrator: During the re-issue and re-arming, and as Sarah is being driven to Point R, all feel the ground quake in a minor earthquake.
[11:11:58] Lt. Jaron: "That cant be good"
[11:12:25] Lt. Alleria: "Fuck… any chance you guys have those new super conversion fast packs?" *she grumbles at the techs
[11:12:29] Narrator: Around you the city is deserted; the only folks about are UEEF military.
[11:12:33] Lt. Alleria: (err woops)
[11:12:41] Lt. Fallnya: "Fuck… any chance you guys have those new super conversion fast packs?" *she grumbles at the techs
[11:13:07] CWO Sarah: (( afk a few ))
[11:13:10] UEEF: Sorry ma'am what you see is what you get! We'll have you fully armed in no time!!
[11:14:19] CWO Sarah: (( did anyone other than the GM see me pose getting into the Beta for a ride after it was reported as being landed to pick up people? ))
[11:14:37] Lt. Fallnya: (that was in the desert…)
[11:14:38] GM: Sarah, the jeep you are in enters Point R, and you are immediately struck by the fact that the roadblock checkpoint is unmanned. As the jeep progresses further into the city, you pass larger UEEF units as they are making their way towards the city center.
[11:14:45] CWO Sarah: (( oh ))
[11:14:47] CWO Sarah: (( nevbermind then ))
[11:14:51] GM: Yes, I disregarded it
[11:15:10] CWO Sarah: (( that wasn't made clear since marines were at both locations. sorry. ))
[11:15:20] GM: moving n
[11:15:38] Lt. Jaron looks over his new Red Alpha and checks the gunpod along with system checks
[11:16:20] GM: it's not red
[11:16:34] GM: It's the same Dark bluegrey, but with the head unit of a Z
[11:16:35] Lt. Fallnya grumbles and folds arms
[11:16:49] 2 LT Komillia: ((But red paintjobs go three times faster!))
[11:16:50] Lt. Jaron: (aww I wanted red )
[11:16:54] GM: WAAAAAGH!
[11:16:56] CWO Sarah looks out as the jeep enters the point, once again looking around through the window for either her squadron, or the radio she asked the Marines about before they left, wanting to get back to where she needs to be.
[11:17:39] Lt. Fallnya: (heh mines black/purple thanks to customization and lack of hits >.> sorry alleria)
[11:17:49] Lt. Alleria: (*just grumbles*)
[11:18:03] UEEF MP looks at Sarah. Then continues to drive to Point R's airfield
[11:18:10] Lt. Jaron gets a bite to eat from rations, figuring it might be awhile for others sources
[11:18:51] Lt. Ishida sits in her mecha as it is reloaded and minor repairs commence. She is despondent, but goes about her preflight with professionalism
[11:18:53] Lt. Alleria climbs somberly into her new fighter, waving a fond goodbye to her other one, and tries to start this one up
[11:19:26] CWO Sarah sighs, and just shakes her head. She closes her eyes, trying to see if Suki is in any range she can sense, to reach out to mentally. *Suki?*
[11:19:41] UEEF MP brings the jeep to a halt and looks over at Sarah. "Here you go, ma'am. Good hunting!"
[11:20:14] Lt. Fallnya climbs into her alpha and begins preflight checking fuel level, and missile loadout
[11:20:22] CWO Sarah glances up and smiles, standing up. "Thank you! and I hope so… Hope they have a spare Beta, mine is totally gone…" She laughs a little and heads for the squad.
[11:20:24] Lt. Ishida hears …something…but pushes it aside, it's only her mind playing tricks.
[11:20:47] CWO Sarah: (( lol ))
[11:21:10] GM: Hitman squad sans Suki and well…Suki, roll a perc
[11:21:22] 2 LT Komillia: [1d20] => = (12)
[11:21:23] Lt. Fallnya: [1d20] => = (19)
[11:21:31] Lt. Jaron: [1d20+2] => [1,2] = (3)
[11:21:40] CWO Sarah: (( ooh, a squirrel! ))
[11:21:45] Lt. Alleria: [1d20+1] => [3,1] = (4)
[11:22:00] GM: Swear to god, Tirol is chock full of squirrels
[11:22:06] Lt. Jaron: (yea, its got big nuts too)
[11:22:10] 2Lt. Aylanea: [1d20] => = (1)
[11:22:22] Lt. Jaron: (haha Ay saw the nuts too)
[11:22:23] 2Lt. Aylanea: (( *rolls* ))
[11:23:25] GM: Lurana, you see her first, Sarah running towards the flight line. Komillia, you see her seconds after. The rest…you see only squirrels, OH GOD WHY! they are everywhere!
[11:23:50] 2Lt. Aylanea: (( squirrels with big nuts. Yup ))
[11:24:10] Lt. Fallnya: "Well well, I figured she didn't die so easily. Suki, look to your left" (she speaks on short-wave helmet radio)
[11:24:30] Lt. Ishida: Hmm? Nani?
[11:24:51] 2 LT Komillia: Gee, she has more lives than a cat.
[11:25:08] Lt. Ishida sees Sarah and her eyes bulge! She drops her data pad and, using all four limbs, scrambles out of her cockpit and tackles Sarah!
[11:25:48] Lt. Ishida hugs Sarah and starts babbling in Japanese, peppering her face with kisses, holding her even tighter.
[11:26:05] 2 LT Komillia: Somebody get a camera! This'll get lots of money on Pay-Per-view!
[11:26:16] CWO Sarah goes crashing right over as she's tackled, blushing and hugging tightly on the ground. "Sorry for vanishing like that. I tried to find a radio and call sooner…" She blushes at all the kisses, kissing back and hugging tight happily.
[11:26:36] Lt. Fallnya: "Yeesh, get a room you two. We got CAP to pull"
[11:26:51] Lt. Fallnya, if she's ready, throttles up and begins to take off VTOL
[11:26:54] Lt. Ishida looks up and nods, then stands, dusts herself off
[11:27:19] CWO Sarah frowns at that. "I don't have a jet though, and am probably listed as dead or something, if anyone reported before now."
[11:27:26] Lt. Ishida: Sumimasen Taicho-san.
[11:27:30] Lt. Ishida says to Lurana
[11:28:13] Lt. Fallnya: "You're gonna have to speak English or Zentraedi, Ishida"
[11:28:18] CWO Sarah: (( lol ))
[11:28:49] 2Lt. Aylanea blinks over at the shortwave convo. "Huh. Didn't see her punch out…"
[11:28:50] Narrator: As final preparations are made to the Hitman team's new mecha, Sarah is issued a Beta; no one was going to say no to a pissed-off Zentraedi female squad leader…the sounds of air-raid sirens blare overhead.
[11:28:53] Lt. Jaron watches the two embrace and smiles slightly
[11:30:08] Lt. Jaron runs to get to his alpha
[11:30:10] CWO Sarah looks to Suki quickly as the air raid siren sounds, snuggling her quick and jumping for the Beta cockpit. "Guess it's back to work then."
[11:30:35] Lt. Ishida: If you scare me like that again, I'm breaking up with you!
[11:31:23] Lt. Fallnya: "All fighters launch as soon as ready. Battloids…I'm not sure where command wants you two deployed, since you can't keep up with us…"
[11:31:43] GM: Poor Gamjin, the damn UEEF ran out of Condors…guess you have to fly a Vindicator instead…oh, the shame…
[11:32:06] Lt. Fallnya: (lucky ass)
[11:32:14] CWO Sarah laughs a little at that, checking for CVR and a helmet to put on. "I'll try not to. I had to jump blind, ended up at the camp from this morning."
[11:34:11] UEEF: Hitman Flight, welcome to the CAP. Your air assets are to join the existing CAP, we have inbound aircraft from the east. Your ground units, form up on the overpass and provide support from there.
[11:35:47] Lt. Fallnya: "All fighters remain separated for now,
[11:36:10] GM: There is your basic playout
[11:36:14] GM: layout
[11:36:26] Lt. Fallnya: (ok)
[11:36:32] Lt. Jaron soon is ready to fly
[11:36:34] Lt. Fallnya: (what are the 3 red dots?)
[11:36:38] GM: Maroon dots are where they want the ground units
[11:36:48] Lt. Fallnya: (ok)
[11:36:55] GM: the dark grey lines are highways
[11:37:12] GM: brown circle is city center
[11:37:23] GM: grey are city districts
[11:37:58] GM: now, everyone remember what Damocles said?
[11:38:04] GM: kk good
[11:38:27] Lt. Fallnya: (obey command and stay alive lol)
[11:39:10] CWO Sarah: (( 3 cruisers approaching from badlands. ))
[11:40:00] Lt. Jaron takes a breath and hopes luck favors them this time
[11:40:30] Lt. Fallnya: "Hitman flight launching, deploying C&C systems and praying to gods…" [1d100] => = (98) vs 61 RSI
[11:40:37] Narrator: As the Bioroids and Vindicator take up positions with other mecha of their like upon the overpass, they can see roadblocks and other defensive positions sandbagged along the length of the road. In the air, Hitman flight passes squads of Alphas in guardian mode hovering near the city's skyscrapers, as other Alphas have taken up position on building roofs, some ready to fly off, others ready to hold that particular position. In the air you form up with other CAP fighters, and off to your east, you begin to get a hint of what's coming your way….
[11:40:38] CWO Sarah: (( lol ))
[11:40:48] Lt. Fallnya: (and i think i just fried my C&C system x.x)
[11:41:35] 2Lt. Aylanea watches quietly as she forms up, frowning as she looks eastward.
[11:41:43] GM: >.< don't roll unless it's your turn, okay?
[11:42:05] Lt. Fallnya: (ok fine ill ignore the roll then)
[11:42:08] Lt. Fallnya: (if you will)
[11:42:24] GM: I wouldn't have counted it if it were a 1 so no…
[11:43:37] Lt. Alleria looks around for any other bricks not attached, other than Sarah's
[11:43:38] GM: Off to your east you see a morning sky, a sunrise, incredibly beautiful and soothing, if not for the swarm of black shapes growing larger, approaching from the east.
[11:44:37] GM: Like I told Lu, Sarah, your Beta is loaded with 4 pylons, MRM loads, 3 missiles per
[11:45:47] 2Lt. Aylanea: (( cool ))
[11:46:13] CWO Sarah: (( what about a CVR inside the cockpit for the pilot? ))
[11:46:17] Lt. Fallnya: (additnal pylons on the alphas or no?)
[11:46:18] CWO Sarah: (( I'm assuming? ))
[11:47:08] Narrator: As you join the UEEF fighters already on station, you start to get telemetry from Aegis craft already in flight, and your radar queue soon clogs up. With far too many tracks to handle, your radar systems start to prioritize, only displaying the closest 50 targets; those in an H model are able to track the closest 144 targets…
[11:48:03] Lt. Fallnya: (think im in shadow with H head unit…)
[11:48:22] Narrator: Then you see 144 targets with a backlog of tracks as long as your arm
[11:49:02] Lt. Fallnya: "Heh feels like that first battle at earth again… but this time in atmosphere…"
[11:49:19] Lt. Ishida nods
[11:49:35] GM: Sarah, you have CVR
[11:49:42] 2Lt. Aylanea: (( I also asked about which kind of rifle the Alpha had. *sigh* And which kind it was, I assume not a red one. ))
[11:49:50] GM: doesn't fit quite right, but, you have it
[11:49:53] CWO Sarah: (( ty ))
[11:50:17] GM: All your gunpods were functional and have been given to your new planes
[11:50:26] CWO Sarah: (( okay ))
[11:50:41] GM: Komillia, you have been issued the GAU-15 that Suki and Anji brought back from Point R
[11:51:08] 2 LT Komillia: ((I'll look that up.))
[11:51:24] Lt. Fallnya: "Hitman flight, stay as tight together as possible and cover each other's backsides, it's gonna get hairy up here. Hold your MRMs for the cruisers as long as possible"
[11:51:35] GM: 100 xp to the first person that can tell me what pertinent information Damocles relayed to you.
[11:51:36] CWO Sarah: (( weapons and equipment on the website. ))
[11:52:17] 2 LT Komillia: ((3 fantoma class cruisers not heading from the east?))
[11:52:22] Lt. Fallnya: (3 Fantoma cruisers approaching from the Badlands,
[11:52:24] GM: keep cruising
[11:52:27] CWO Sarah: (( Big fleet in orbit, no space suppor ))
[11:52:30] GM: 100 for both
[11:52:31] CWO Sarah: (( support ))
[11:52:39] GM: 100 to Komillia and Lurana
[11:53:36] CWO Sarah: (( *already mentioned that when it was asked if we remembered what they said earlier* ;p ))
[11:53:54] Lt. Jaron: (yea, thought it was something else)
[11:53:59] GM: 100 to Sarah as well, I see it
[11:54:19] GM: Just remember, I dont throw out info like that for nada lol
[11:55:18] Lt. Fallnya: (which is why she's telling our betas to hold their medium-range missiles for the cruisers
[11:55:35] GM: Okie dokie! We will pick up "Uber Epic and Heroic Fighting for make benefit great nation of UEEF" next week
Ir. Haery Sihombing/IP
Lecturer (Pensyarah), Faculty of Manufacturing Engineering, Universiti Teknologi Malaysia Melaka

Chapter 3: DIRECT COST
Chapter 4: INDIRECT COSTS

COST
Cost is not a simple concept. It is important to distinguish between four different types: fixed, variable, average and marginal.
- Cost is the monetary measure of resources given up to attain an objective (such as acquiring a good or delivering a service).
- A cost may be defined as a sacrifice or giving up of resources for a particular purpose. Costs are frequently measured by the monetary units that must be paid for goods and services.

Cost and Cost Terminology
- Cost is a resource sacrificed or forgone to achieve a specific objective.
- An actual cost is the cost incurred (a historical cost), as distinguished from budgeted costs.
- A cost object is anything for which a separate measurement of costs is desired.
Cost and Cost Terminology (continued)
Cost assignment to a cost object covers both:
- Tracing direct costs
- Allocating indirect costs

Cost Classifications: Association with a Cost Object
- A cost object is anything for which management wants to collect or accumulate costs.
- Direct: traceable to a cost object.
- Indirect: not conveniently or practically traceable to a cost object; treated as overhead and allocated.

Cost Classification Categories
- Cost object: anything for which a separate measurement of costs is desired.
- Direct costs: costs that are related to a particular cost object in an economically feasible (cost-effective) manner.
- Cost pool: a grouping of individual cost items.
- Cost allocation base: a factor that is the common denominator for systematically linking an indirect cost, or group of indirect costs, to a cost object.
Cost Categories
- By association with a cost object: direct or indirect.
- By reaction to changes in activity: variable, fixed, mixed, step.

Cost Allocation
The same issue exists for merchandising firms, though it is easier there: purchase price (major), shipping cost (minor), taxes (minor).
Relevant range: the normal operating range.

Classification of Costs
This section concentrates on the big picture of how manufacturing costs are accumulated and classified.

Cost Objective
A cost objective or cost object is defined as anything for which a separate measurement of costs is desired. Examples include departments, products, activities, and territories. Accounts can be classified by the type of cost and by the product or department to which it belongs.
Categories of Manufacturing Costs
All costs which are eventually allocated to products are classified as either:
1. direct materials,
2. direct labour, or
3. indirect manufacturing.

Direct-Material Costs
Direct-material costs include the acquisition costs of all materials that are physically identified as a part of the manufactured goods and that may be traced to the manufactured goods in an economically feasible way. Direct materials are materials that are clearly and easily identified with a particular product. Example: steel used to manufacture an automobile.

Direct-Labour Costs
Direct-labour costs include the wages of all labour that can be traced specifically and exclusively to the manufactured goods in an economically feasible way.
Direct labour comprises labor costs that are clearly traceable to, or readily identifiable with, the finished product. Example: wages paid to an automobile assembly worker.

Indirect Manufacturing Costs
Indirect manufacturing costs, or factory overhead, include all costs associated with the manufacturing process that cannot be traced to the manufactured goods in an economically feasible way: all factory costs except direct material and direct labor. Examples:
- Indirect labor (e.g. maintenance)
- Indirect material (e.g. cleaning supplies)
- Factory utility costs
- Supervisory costs

Prime Costs and Conversion Costs
- Prime costs: direct materials + direct labour.
- Conversion costs: direct labour + factory overhead.
Product Costs
Product costs are costs identified with goods produced or purchased for resale:
- Direct material: a measurable part of a product.
- Direct labor: labor used to manufacture a product or perform a service.
- Overhead: indirect production cost.

Product Costs vs Period Costs
Product costs are initially identified as part of the inventory on hand. These product costs (inventoriable costs) become expenses, in the form of cost of goods sold, only when the inventory is sold: they first appear on the balance sheet in inventory accounts and are transferred to the income statement when the product is sold. Period costs are costs that are deducted as expenses during the current period without going through an inventory stage.
Period Costs
- Selling and administrative costs.
- Distribution costs: the cost to warehouse, transport, and/or deliver a product or service; major impact on managerial decision making.
- Appear on the income statement when incurred, i.e. are expensed when incurred.

Classification by Function: Period and Product Costs
Period costs are expenses not charged to the product.
- Selling costs: costs incurred to obtain customer orders and to deliver finished goods to customers (advertising and shipping).
- Administrative costs: non-manufacturing costs of staff support and administrative functions (accounting, data processing, personnel, research and development).

(Flattened flow diagram:) Costs incurred as period costs (expenses) go straight to the 2005 income statement as operating expenses. Costs incurred as product costs (inventory) split: inventory sold in 2005 becomes cost of goods sold on the 2005 income statement, while inventory not sold in 2005 remains on the 2005 balance sheet (raw materials, work in process, finished goods) and becomes cost of goods sold on the 2006 income statement.
Product Cost - Direct
- Direct material: conveniently and economically traced to the cost object.
- Direct labor: labor used to manufacture a product or perform a service. Includes wages paid to direct labor employees, production bonuses, and payroll taxes; may include holiday and vacation pay, insurance, and retirement benefits.

Product Cost - Indirect
Overhead (indirect production costs) includes:
- Fringe benefits, if they cannot be easily traced to the product
- Overtime, if due to random scheduling
- Cost of quality: prevention costs, appraisal costs, failure costs

Product Cost vs. Period Cost
- Product cost: all costs incurred in getting the product to saleable condition. Three main elements: raw materials, labour, factory overheads.
- Period cost: all costs incurred for a period of time regardless of production. Sometimes classified into marketing expenses, general (administrative) expenses, and financial expenses.

Direct Costs
Direct costs can be identified specifically and exclusively with a given cost objective in an economically feasible way.
Indirect Costs
Indirect costs cannot be identified specifically and exclusively with a given cost objective in an economically feasible way.

Direct vs. Indirect Costs
- Direct costs: major costs that can be directly attributed to the final product or service. Includes direct materials, direct labour, and other items such as subcontractors and tender document preparation.
- Indirect costs: all other costs, which cannot be directly attributed to the final product or service. Includes indirect materials (factory supplies, small items of material), indirect labour (admin, cleaning or security staff) and factory overheads (rates, rent, insurance, telephone, stationery).

Classification by Traceability
- Direct costs: costs incurred for the benefit of one specific cost object. Example: material and labor cost for a product.
- Indirect costs: costs incurred for the benefit of more than one cost object. Example: maintenance expenditures benefiting two or more departments.

Fixed Cost vs. Variable Cost
- Fixed costs: costs that in total will remain the same for a period of time, over a relevant range of output. Includes rent, rates, insurance and depreciation.
- Variable costs: costs that in total will tend to increase as the output level increases. Includes direct materials and direct labour.
Overhead Cost Allocation
Allocating overhead assigns indirect costs to one or more cost objects:
- To determine full absorption cost (GAAP)
- To motivate management
- To compare alternative courses of action for planning, controlling, and decision making
The allocation process should be rational and systematic.

Actual vs. Normal Cost Systems
The actual cost system is not timely: all costs must be known before the product cost can be calculated.

Product cost element   Actual cost system   Normal cost system
Direct materials       Actual               Actual
Direct labor           Actual               Actual
Overhead               Actual               Predetermined overhead rate
Classification by Behavior
Cost behavior means how a cost will react to changes in the level of business activity.
- Total fixed costs do not change when activity changes.
- Total variable costs change in proportion to activity changes.
(Figure: two cost-versus-activity graphs, a flat line for total fixed costs and a rising line for total variable costs.)

Product Cost Behavior
- Direct material: variable
- Direct labor: variable
- Overhead: variable, fixed, or mixed

Potential Multiple Cost Classifications
Cost item                     Behavior   Traceability   Function
Material                      Variable   Direct         Product
Assembly wages                Variable   Direct         Product
Advertising                   Fixed      Indirect       Period
Production manager's salary   Fixed      Indirect       Product
Office depreciation           Fixed      Indirect       Period
Direct and Indirect Costs: Exercise
- Direct cost example: oak wood used in the manufacture of chairs.
- Indirect cost example: salary of the plant night watchperson.
- Cost object example: 50 oak chairs produced in May.

Direct and Indirect Costs Example
Direct costs:
- Maintenance Department: $40,000
- Personnel Department: $20,600
- Assembly Department: $75,000
- Finishing Department: $55,000
Assume that Maintenance Department costs are allocated equally among the production departments. How much is allocated to each department? The $40,000 is split $20,000 to Assembly (direct costs $75,000) and $20,000 to Finishing (direct costs $55,000).
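The equal split above is simple enough to express in a few lines. Below is a minimal Python sketch of the idea; the function name is illustrative, and only the $40,000 figure and the two department names come from the example.

    def allocate_equally(support_cost, departments):
        # Spread one support department's cost evenly over the receiving departments.
        share = support_cost / len(departments)
        return {dept: share for dept in departments}

    print(allocate_equally(40_000, ["Assembly", "Finishing"]))
    # {'Assembly': 20000.0, 'Finishing': 20000.0}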
Cost Behavior Patterns Example
Bicycles by the Sea buys a handlebar at $52 for each of its bicycles.
- Total handlebar cost when 1,000 bicycles are assembled: 1,000 units × $52 = $52,000.
- Total handlebar cost when 3,500 bicycles are assembled: 3,500 units × $52 = $182,000.

Bicycles by the Sea incurred $94,500 in a given year for the leasing of its plant. This is an example of a fixed cost with respect to the number of bicycles assembled.
- Leasing (fixed) cost per bicycle when 1,000 bicycles are assembled: $94,500 ÷ 1,000 = $94.50.
- Leasing (fixed) cost per bicycle when 3,500 bicycles are assembled: $94,500 ÷ 3,500 = $27.
Cost Drivers
The cost driver of variable costs is the level of activity or volume whose change causes the (variable) costs to change proportionately. The number of bicycles assembled is a cost driver of the cost of handlebars.

Relevant Range Example
Assume that fixed (leasing) costs are $94,500 for a year and that they remain the same for a certain volume range (1,000 to 5,000 bicycles). 1,000 to 5,000 bicycles is the relevant range.
(Figure: fixed costs plotted as a flat $94,500 line against volume.)

Relationships of Types of Costs
Variable/fixed and direct/indirect are independent classifications: a cost can be any combination of the two.
Total Costs and Unit Costs Example
What is the unit cost (leasing and handlebars) when Bicycles assembles 1,000 bicycles?
Total fixed cost $94,500 + total variable cost $52,000 = $146,500
$146,500 ÷ 1,000 = $146.50 per bicycle
(Figure: total costs $94,500 + $52x plotted against volume, starting at $94,500 and reaching $146,500 at 1,000 units.)

Use Unit Costs Cautiously
Assume that Bicycles management uses a unit cost of $146.50 (leasing and handlebars). Management is budgeting costs for different levels of production.
- Budgeted cost for an estimated production of 600 bicycles: 600 × $146.50 = $87,900.
- Budgeted cost for an estimated production of 3,500 bicycles: 3,500 × $146.50 = $512,750.
But what should the budgeted cost be for an estimated production of 600 bicycles?
Use Unit Costs Cautiously (continued)
For 600 bicycles:
Total fixed cost                      $ 94,500
Total variable cost ($52 × 600)         31,200
Total                                 $125,700
$125,700 ÷ 600 = $209.50 per unit
Using a cost of $146.50 per unit would underestimate actual total costs if output is below 1,000 units.

For 3,500 bicycles:
Total fixed cost                      $ 94,500
Total variable cost ($52 × 3,500)      182,000
Total                                 $276,500
$276,500 ÷ 3,500 = $79.00 per unit

Merchandising vs Service
Merchandising companies purchase and then sell tangible products without changing their basic form. Service companies provide services or intangible products to their customers; labor is their most significant cost category.
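The budgeting pitfall above is mechanical, so a short sketch makes it explicit. This is a minimal Python illustration assuming the $94,500 fixed / $52-per-unit variable structure of the Bicycles by the Sea example; the function names are illustrative only.

    FIXED = 94_500          # plant lease for the year
    VAR_PER_UNIT = 52       # handlebar cost per bicycle

    def true_total_cost(units):
        # Total cost from the actual fixed + variable structure.
        return FIXED + VAR_PER_UNIT * units

    def naive_budget(units, unit_cost=146.50):
        # Budget built by scaling a unit cost that was computed at 1,000 units.
        return unit_cost * units

    for units in (600, 1_000, 3_500):
        print(units, naive_budget(units), true_total_cost(units))
    # At 600 units the naive budget ($87,900) understates the true cost ($125,700);
    # at 3,500 units it overstates it ($512,750 vs $276,500).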
Types of Inventory
Manufacturing-sector companies typically have one or more of the following three types of inventories:
1. Direct materials inventory
2. Work in process inventory (work in progress)
3. Finished goods inventory
Merchandising-sector companies hold only one type of inventory: the product in its original purchased form. Service-sector companies do not hold inventories of tangible products.

Classification of Manufacturing Costs
Direct materials costs, direct manufacturing labor costs, and indirect manufacturing costs are inventoriable costs (assets) that become cost of goods sold after a sale takes place.
Period Costs
Period costs are all costs in the income statement other than cost of goods sold. Period costs are recorded as expenses of the accounting period in which they are incurred.

Flow of Costs Example
Bicycles by the Sea had $50,000 of direct materials inventory at the beginning of the period. Purchases during the period amounted to $180,000 and ending inventory was $30,000. How much direct materials were used?
$50,000 + $180,000 − $30,000 = $200,000

Direct labor costs incurred were $105,500. Indirect manufacturing costs were $194,500. What are the total manufacturing costs incurred?
Direct materials used            $200,000
Direct labor                      105,500
Indirect manufacturing costs      194,500
Total manufacturing costs        $500,000

Assume that the work in process inventory at the beginning of the period was $30,000, and $35,000 at the end of the period. What is the cost of goods manufactured?
Beginning work in process        $ 30,000
Total manufacturing costs         500,000
Less: Ending work in process       35,000
Cost of goods manufactured       $495,000
Flow of Costs Example (continued)
Assume that the finished goods inventory at the beginning of the period was $10,000, and $15,000 at the end of the period. What is the cost of goods sold?
Beginning finished goods         $ 10,000
Cost of goods manufactured        495,000
Less: Ending finished goods        15,000
Cost of goods sold               $490,000

(T-accounts:) Work in Process starts with a beginning balance of $30,000 and is debited with direct materials used $200,000, direct labor $105,500, and indirect manufacturing costs $194,500; it is credited $495,000 (cost of goods manufactured) to Finished Goods, leaving an ending balance of $35,000. Finished Goods starts at $10,000, receives the $495,000, and transfers $490,000 to Cost of Goods Sold, ending at $15,000.

Manufacturing Company
On the balance sheet, inventoriable costs sit in the materials, work in process, and finished goods inventories. On the income statement, when sales occur: revenues, deduct cost of goods sold, equals gross margin; deduct period costs, equals operating income.
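The entire flow reduces to three inventory equations of the form beginning + additions − ending. Here is a compact Python sketch using the Bicycles by the Sea figures; the function names are illustrative.

    def direct_materials_used(beg, purchases, end):
        return beg + purchases - end

    def cost_of_goods_manufactured(beg_wip, total_mfg_costs, end_wip):
        return beg_wip + total_mfg_costs - end_wip

    def cost_of_goods_sold(beg_fg, cogm, end_fg):
        return beg_fg + cogm - end_fg

    dm = direct_materials_used(50_000, 180_000, 30_000)        # 200,000
    total = dm + 105_500 + 194_500                             # 500,000 total manufacturing costs
    cogm = cost_of_goods_manufactured(30_000, total, 35_000)   # 495,000
    cogs = cost_of_goods_sold(10_000, cogm, 15_000)            # 490,000
    print(dm, total, cogm, cogs)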
Merchandising Company
On the balance sheet, inventoriable costs are the merchandise purchases inventory. On the income statement, when sales occur: revenues, deduct cost of goods sold, equals gross margin; deduct period costs, equals operating income.

Prime Costs
Prime costs are all direct manufacturing costs: direct materials + direct labor = prime costs.
What are the prime costs for Bicycles by the Sea?
Direct materials used $200,000 + direct labor $105,500 = $305,500

Conversion Costs
Direct labor + manufacturing overhead (indirect labor, indirect materials, other) = conversion costs.
Conversion Costs (continued)
What are the conversion costs for Bicycles by the Sea?
Direct labor $105,500 + indirect manufacturing costs $194,500 = $300,000
Conversion cost = all manufacturing cost except direct materials.

Measuring Costs Requires Judgment
Manufacturing labor-cost classifications vary among companies. The following distinctions are generally found: direct manufacturing labor versus manufacturing overhead. Manufacturing overhead may include:
- Indirect labor and managers' salaries
- Payroll fringe costs
- Forklift truck operators (internal handling of materials)
- Janitors
- Rework labor
- Overtime premium
- Idle time
Overtime premium is usually considered part of overhead. Assume that a worker gets $18/hour for straight time and gets time and one-half for overtime.
How much is the overtime premium? $18 × 50% = $9 per overtime hour.
If this worker works 44 hours in a given week, how much are his gross earnings?
Direct labor: 44 hours × $18 = $792
Overtime premium: 4 hours × $9 = $36
Total gross earnings: $828

Many Meanings of Product Cost
A product cost is the sum of the costs assigned to a product for a specific purpose, e.g.:
1. Pricing and product emphasis decisions
2. Contracting with government agencies
3. Preparing financial statements for external reporting under generally accepted accounting principles

Balance Sheet of a Manufacturer: Inventory Classifications
- Raw materials: materials waiting to be processed.
- Work in process: partially complete products; material to which some labor and/or overhead have been added.
- Finished goods: completed products for sale.
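A small Python sketch of that payroll split, assuming the $18/hour rate and the 40-hour standard week implied by the example (the function name and the 40-hour assumption are illustrative):

    def gross_earnings(hours, rate=18.0, standard_week=40):
        # All hours are costed as direct labor at straight time; the extra
        # half-rate on overtime hours is the premium, usually charged to overhead.
        overtime_hours = max(0, hours - standard_week)
        direct_labor = hours * rate              # 44 * $18 = $792
        premium = overtime_hours * rate * 0.5    # 4 * $9 = $36
        return direct_labor, premium, direct_labor + premium

    print(gross_earnings(44))  # (792.0, 36.0, 828.0)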
Income Statement of a Manufacturer
The major difference:
- Merchandiser: beginning merchandise inventory + cost of goods purchased − ending merchandise inventory = cost of goods sold.
- Manufacturer: beginning finished goods inventory + cost of goods manufactured − ending finished goods inventory = cost of goods sold.
Cost of goods sold for manufacturers differs only slightly from cost of goods sold for merchandisers.

Merchandising company cost of goods sold:
Beg. merchandise inventory       $ 14,200
+ Purchases                       234,150
= Goods available for sale       $248,350
− Ending merchandise inventory    (12,100)
= Cost of goods sold             $236,250

Manufacturing company cost of goods sold:
Beg. finished goods inventory    $ 14,200
+ Cost of goods manufactured      234,150
= Goods available for sale       $248,350
− Ending finished goods inventory (12,100)
= Cost of goods sold             $236,250

Manufacturing costs are often combined as follows: direct material and direct labor form prime cost; direct labor and manufacturing overhead form conversion cost.

Question: What type of account is the manufacturing goods in process account?
a. Income statement expense account.
b. Balance sheet inventory account.
c. Temporary clearing account for direct material and direct labor.
d. Holding account for manufacturing overhead and direct labor.
Question: The primary distinction between product and period costs is...
a. Product costs are expensed in the period incurred.
b. Product costs are directly traceable to product units.
c. Product costs are inventoriable.
d. Period costs are inventoriable.

Flow of Manufacturing Activities
- Materials activity: raw materials beginning inventory + raw materials purchases → raw materials used, leaving raw materials ending inventory.
- Production activity: work in process beginning inventory + direct labor + factory overhead + raw materials used → cost of goods manufactured, leaving work in process ending inventory.
- Sales activity: finished goods beginning inventory + cost of goods manufactured → cost of goods sold, leaving finished goods ending inventory.

Statement of Cost of Goods Manufactured
The cost of all goods completed and transferred from work in process to finished goods during a reporting period:
Direct materials used + direct labor + factory overhead = total manufacturing costs
Total manufacturing costs + beginning work in process − ending work in process = cost of goods manufactured
Let's take a look at Rocky Mountain Bikes' Statement of Cost of Goods Manufactured.
Statement of Cost of Goods Manufactured

ROCKY MOUNTAIN BIKES
Statement of Cost of Goods Manufactured
For Year Ended 31 December 2005
Direct materials used in production         $ 85,500
Direct labor                                  60,000
Total factory overhead costs                  30,000
Total manufacturing costs for the period    $175,500
Add: Beginning work in process inventory       2,500
Total cost of work in process               $178,000
Less: Ending work in process inventory         7,500
Cost of goods manufactured                  $170,500

Computation of cost of direct material used:
Beginning raw materials inventory           $  8,000
Add: Purchases of raw materials               86,500
Cost of raw materials available for use     $ 94,500
Less: Ending raw materials inventory           9,000
Cost of direct materials used in production $ 85,500

Direct labor includes all direct labor costs incurred during the current period.

Computation of total manufacturing overhead:
Indirect labor                              $  9,000
Factory supervision                            6,000
Factory utilities                              2,600
Property taxes, factory building               1,900
Factory supplies used                            600
Factory insurance expired                      1,100
Depreciation, building and equipment           5,300
Other factory overhead                         3,500
Total factory overhead costs                $ 30,000
Notes on the statement:
- Beginning work in process inventory is carried over from the prior period.
- Ending work in process inventory contains the cost of unfinished goods, and is reported in the current assets section of the balance sheet.
Direct Costing (Chapter 3: DIRECT COST)
- An alternative method of costing, and a relatively new one.
- A more useful costing method for management planning and decision making.
- Also known as variable costing, since most direct costs are variable with respect to the level of activity.
- The main difference between absorption costing and direct costing is the treatment of fixed manufacturing overhead.

Treatment of Fixed Manufacturing Overhead
- Fixed manufacturing cost is not treated as a product cost; instead it is treated as a period cost.
- That is, it is written off (expensed) in the period in which it is incurred, rather than included as a cost when determining the cost of inventory.
- If fixed manufacturing costs are excluded from the cost of inventory under direct costing, then the value of inventory at the end of an accounting period will be lower than under absorption costing; this affects both the balance sheet and profits.

ST 10.1: metres manufactured 9,000; metres sold 8,600.
                              Total       Per metre
Direct materials            $ 42,300      $  4.70
Direct labour               $ 54,000      $  6.00
Fixed factory overhead      $ 72,000      $  8.00
Variable factory overhead   $ 36,000      $  4.00
                            $204,300      $ 22.70

Manufacturing cost per metre: total costs $204,300 ÷ 9,000 metres produced = $22.70 per metre.

Product cost using absorption costing: $42,300 + $54,000 + $72,000 + $36,000 = $204,300.
Product cost using direct costing (fixed factory overhead excluded): $42,300 + $54,000 + $36,000 = $132,300.
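A minimal Python sketch of the ST 10.1 comparison, using the per-metre rates above; the only difference between the two methods is whether fixed factory overhead enters the product cost.

    dm, dl, ffoh, vfoh = 4.70, 6.00, 8.00, 4.00   # cost per metre
    produced, sold = 9_000, 8_600

    absorption_cost = dm + dl + ffoh + vfoh       # $22.70 per metre
    direct_cost = dm + dl + vfoh                  # $14.70 per metre

    closing_metres = produced - sold              # 400 metres
    print(closing_metres * absorption_cost)       # 9080.0 closing stock, absorption
    print(closing_metres * direct_cost)           # 5880.0 closing stock, direct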
ST 1: Value of closing inventory
Metres produced 9,000, less metres sold 8,600 = closing stock 400 metres.
- Using absorption costing: $204,300 ÷ 9,000 = $22.70 per metre; 400 × $22.70 = $9,080.
- Using direct costing: $132,300 ÷ 9,000 = $14.70 per metre; 400 × $14.70 = $5,880.

ST 2: Revenue statement using absorption costing
Sales                                            462,500
Less cost of goods sold:
  Opening inventory                      —
  Cost of production:
    Direct materials used           97,000
    Direct labour used              64,020
    Variable factory overhead       54,320
    Fixed factory overhead         106,700     322,040
  Less closing inventory             14,940    307,100
Gross profit                                     155,400
Less operating expenses:
  Marketing expenses                45,325
  Administrative expense            92,500
  Financial expense                  9,460     147,285
Net profit                                         8,115

Closing inventory: production 19,400 units − sales 18,500 units = 900 units.
Cost of production ÷ production units = $322,040 ÷ 19,400 = $16.60 per unit; 900 × $16.60 = $14,940.

ST 3: Revenue statement using direct costing
Sales                                            462,500
Less variable costs:
  Opening inventory                      —
  Variable cost of production:
    Direct materials used           97,000
    Direct labour used              64,020
    Variable factory overhead       54,320     215,340
  Less closing inventory              9,990    205,350
Contribution margin                              257,150
Less fixed costs:
  Manufacturing                    106,700
  Marketing expenses                45,325
  Administrative expense            92,500
  Financial expense                  9,460     253,985
Net profit                                         3,165

Closing inventory: $215,340 ÷ 19,400 = $11.10 per unit; 900 × $11.10 = $9,990.

Reconciliation of reported profits: absorption and direct costing
The difference in profits between absorption and direct costing is caused by the amount of fixed overhead in the opening and closing inventories, because fixed overheads are excluded when using direct costing. To reconcile profit using absorption costing to profit using direct costing, you add back the fixed costs in opening inventory under absorption costing and deduct the fixed costs in closing inventory under absorption costing.
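The reconciliation rule can be checked with a short Python sketch built on the ST 2/ST 3 figures (variable names are illustrative):

    dm, dl, vfoh, ffoh = 97_000, 64_020, 54_320, 106_700
    produced, sold = 19_400, 18_500
    closing_units = produced - sold                        # 900

    absorption_unit = (dm + dl + vfoh + ffoh) / produced   # $16.60
    direct_unit = (dm + dl + vfoh) / produced              # $11.10

    closing_absorption = closing_units * absorption_unit   # $14,940
    closing_direct = closing_units * direct_unit           # $ 9,990

    # With no opening inventory, the profit gap equals the fixed overhead
    # carried in closing inventory under absorption costing.
    fixed_per_unit = ffoh / produced                       # $5.50
    print(closing_units * fixed_per_unit)                  # 4950.0 = 8,115 - 3,165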
Self-test problem 4, part (a): Absorption costing product cost per litre
                                    August      September
Direct materials                    $ 0.05      $ 0.05
Direct labour                         0.25        0.25
Variable manufacturing overheads      0.05        0.05
Fixed manufacturing overheads         0.44        0.40
Cost per litre                      $ 0.79      $ 0.75
(Litres produced: 100,000 in August; 110,000 in September.)

Part (b): Revenue statement using the absorption costing method
                                    August      September
Sales (sale price × litres sold)    117,600     117,600
Less cost of goods sold:
  Opening inventory                  15,800      17,380
  Cost of production                 79,000      82,500
  Goods available for sale           94,800      99,880
  Less closing inventory             17,380      25,500
  Cost of goods sold                 77,420      74,380
Gross profit                         40,180      43,220
Less selling & administrative exp.   30,000      30,000
Net profit                           10,180      13,220

Closing inventory:
  Opening stock (litres)             20,000      22,000
  add production                    100,000     110,000
  less sales                         98,000      98,000
  Closing stock (litres)             22,000      34,000
  × cost per litre                  $  0.79     $  0.75
  Closing inventory value           $17,380     $25,500

Part (c): Variance in profit between August and September: $13,220 − $10,180 = $3,040.
The variance is due to the fixed factory overhead component of cost of goods sold:
                                        August      September
Fixed production costs                   44,000      44,000
Production (litres)                     100,000     110,000
Fixed cost per litre                    $  0.44     $  0.40
Fixed costs in opening inventory          8,800       9,680
  (opening litres × fixed cost per litre; September's opening is August's closing balance)
Fixed costs incurred                     44,000      44,000
Less fixed costs in closing inventory     9,680      13,600
Fixed costs expensed                     43,120      40,080
Difference between September and August: $43,120 − $40,080 = $3,040, which equals the difference in net profit.

Part (d): Revenue statement using the direct costing method
                                    August      September
Sales                               117,600     117,600
Less variable costs:
  Opening inventory (variable only)   7,000       7,700
  Variable cost of production:
    Direct materials used             5,000       5,500
    Direct labour used               25,000      27,500
    Variable factory overhead         5,000       5,500
  Goods available                    42,000      46,200
  Less closing inventory              7,700      11,900
  Variable cost of goods sold        34,300      34,300
Contribution margin                  83,300      83,300
Less fixed costs:
  Manufacturing                      44,000      44,000
  Selling & admin expenses           30,000      30,000
                                     74,000      74,000
Net profit                            9,300       9,300

Closing inventory (variable): $35,000 ÷ 100,000 = $0.35 per litre; 22,000 × $0.35 = $7,700. September: $38,500 ÷ 110,000 = $0.35 per litre; 34,000 × $0.35 = $11,900.
ST 4, part (e): Reconciliation
                                                        August      September
Net profit using absorption costing                      10,180      13,220
add fixed costs in opening inventory (absorption)         8,800       9,680
                                                         18,980      22,900
less fixed costs in closing inventory (absorption)        9,680      13,600
Net profit using direct costing                           9,300       9,300

Reporting variable marketing and administrative expense under direct costing
A revenue statement using direct costing:
- Is divided into two main areas: variable costs and fixed expenses.
- Shows variable non-manufacturing expenses and fixed non-manufacturing expenses separately.
- Shows the variable non-manufacturing expenses after the variable cost of goods sold (manufacturing expenses) but before the net contribution margin line.

ST 5
Sales                                               274,543
Less variable costs:
  Cost of goods sold:
    Inventory 1 July                     26,485
    Variable costs of production:
      Direct materials                   45,965
      Direct labour                      46,980
      Variable factory overheads         22,698    142,128
    Less inventory 30 June               25,660    116,468
Gross contribution margin                           158,075
less variable marketing expense                      16,258
Net contribution margin                             141,817
less fixed costs:
  Factory overhead                       72,458
  Marketing & admin expenses             57,632    130,090
Net profit                                           11,727

Revenue statements with applied factory overheads
- There may be a variance between factory overheads applied and actual factory overheads incurred.
- Any under- or over-applied overhead may be added to or subtracted from the cost of goods sold.
- In absorption costing, the under- or over-applied overhead may include both variable and fixed elements.
- In direct costing, however, under- or over-applied overhead will only include variable overhead, as the fixed overhead is not applied but written off as a period cost.
ST 6, part (a): Product cost calculations
Product cost using direct costing: direct material $2.00 + direct labour $1.50 + variable factory overhead $1.00 = $4.50 per unit.
Product cost using absorption costing: direct material $2.00 + direct labour $1.50 + variable factory overhead $1.00 + fixed factory overhead $2.50 ($75,000 ÷ 30,000 units normal capacity) = $7.00 per unit.

Part (b) calculations:
                                          July         August
Opening stock (units)                     4,000        6,000
  × $7.00 (absorption)                  $28,000      $42,000
  × $4.50 (direct)                      $18,000      $27,000
Closing stock (units)                     6,000        3,000
  × $7.00 (absorption)                  $42,000      $21,000
  × $4.50 (direct)                      $27,000      $13,500

Under- or over-applied fixed factory overhead:
Budgeted and actual fixed overhead      $75,000      $75,000
Fixed overhead applied                  $80,000      $72,500
  (July: 32,000 × $2.50; August: 29,000 × $2.50)
Under- or (over-) applied              ($5,000)      $2,500
                                    over-applied   under-applied

Revenue statements using absorption costing:
                                          July         August
Sales (quantity sold × $9)             $270,000     $288,000
Less cost of goods sold:
  Beginning inventory (× $7)             28,000       42,000
  Cost of production (produced × $7)    224,000      203,000
                                        252,000      245,000
  Less closing inventory                 42,000       21,000
                                        210,000      224,000
  Add under/(over-)applied overhead      (5,000)       2,500
                                        205,000      226,500
Gross profit                             65,000       61,500
Less marketing & admin costs:
  Variable (units sold × $0.30)           9,000        9,600
  Fixed                                  36,000       36,000
                                         45,000       45,600
Net profit                              $20,000      $15,900

Part (c): Revenue statements using direct costing
                                          July         August
Sales (quantity sold × $9)             $270,000     $288,000
Less cost of goods sold:
  Beginning inventory (× $4.50)          18,000       27,000
  Cost of production (produced × $4.50) 144,000      130,500
                                        162,000      157,500
  Less closing inventory                 27,000       13,500
                                        135,000      144,000
Gross contribution margin               135,000      144,000
Less variable marketing costs (× $0.30)   9,000        9,600
Contribution margin                     126,000      134,400
Less fixed costs:
  Manufacturing                          75,000       75,000
  Fixed marketing, admin & finance       36,000       36,000
                                        111,000      111,000
Net profit                              $15,000      $23,400
ST 6, part (d): Reconciliation
                                                        July        August
Net profit using absorption costing                     20,000      15,900
add fixed costs in opening inventory (absorption)
  (4,000 × $2.50) (6,000 × $2.50)                       10,000      15,000
                                                        30,000      30,900
less fixed costs in closing inventory (absorption)
  (6,000 × $2.50) (3,000 × $2.50)                       15,000       7,500
Net profit using direct costing                        $15,000     $23,400

Sales is only one component of profit: profit is also affected by the difference between quantity produced and quantity sold. Under the absorption method, the opening stocks of each accounting period contain a fixed manufacturing component carried forward from the previous period.

ST problem 7(a): Overhead recovery rates
Fixed factory overhead recovery rate: budgeted fixed factory overhead $150,000 ÷ budgeted direct labour hours 15,000 = $10 per direct labour hour.
Variable factory overhead recovery rate: budgeted variable factory overhead $45,000 ÷ 15,000 hours = $3 per direct labour hour.
Combined factory overhead rate: ($150,000 + $45,000) ÷ 15,000 hours = $13 per direct labour hour.

7(b): Under- or over-applied overhead
Combined: actual fixed overhead $154,000 + actual variable overhead $48,000 = $202,000; combined overhead applied (15,000 direct labour hours × $13/hr) = $195,000; under-applied overhead $7,000.
Fixed: actual $154,000 − applied (15,000 hrs × $10/hr) $150,000 = $4,000 under-applied.
Variable: actual $48,000 − applied (15,000 hrs × $3/hr) $45,000 = $3,000 under-applied.
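A short Python sketch of the ST 7 recovery-rate arithmetic, assuming the 15,000 budgeted direct labour hours implied by the $13/hour combined rate:

    budgeted_fixed, budgeted_variable = 150_000, 45_000
    budgeted_dlh = 15_000

    fixed_rate = budgeted_fixed / budgeted_dlh          # $10 per direct labour hour
    variable_rate = budgeted_variable / budgeted_dlh    # $3 per direct labour hour
    combined_rate = fixed_rate + variable_rate          # $13 per direct labour hour

    actual_fixed, actual_variable, actual_dlh = 154_000, 48_000, 15_000
    applied = combined_rate * actual_dlh                # $195,000 applied
    under_applied = (actual_fixed + actual_variable) - applied
    print(under_applied)  # 7000.0 under-applied (negative would mean over-applied)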
7(c) Calculations

Product cost using absorption costing: direct materials $2.00 + direct labour $1.00 + fixed factory overhead $5.00 ($10/direct labour hour x 15,000 hours / 30,000 units) + variable factory overhead $1.50 ($3/direct labour hour x 15,000 hours / 30,000 units) = $9.50 per unit.

Product cost using direct costing: direct materials $2.00 + direct labour $1.00 + variable factory overhead $1.50 = $4.50 per unit.

Inventory values (no work in progress; finished goods only):
  Opening stock 4,000 units: $38,000 at $9.50 (absorption); $18,000 at $4.50 (direct).
  Closing stock 8,000 units: $76,000 at $9.50 (absorption); $36,000 at $4.50 (direct).

Revenue statement using absorption costing (7c)

Sales                                              $338,000
Opening inventory (at $9.50)             $38,000
Cost of production (30,000 x $9.50)      $285,000
                                         $323,000
Less closing inventory                   $76,000
                                         $247,000
Add under-applied overhead               $7,000
Cost of goods sold                                 $254,000
Gross profit                                       $84,000
Less marketing & admin costs
  ($33,380 variable + $18,000 fixed)               $51,380
Net profit                                         $32,620

Revenue statement using direct costing (7d)

Sales                                              $338,000
Opening inventory (at $4.50)             $18,000
Cost of production (30,000 x $4.50)      $135,000
                                         $153,000
Less closing inventory                   $36,000
                                         $117,000
Add under-applied variable overhead      $3,000
Variable cost of goods sold                        $120,000
Gross contribution margin                          $218,000
Less variable marketing & admin expenses           $33,380
Contribution margin                                $184,620
Less fixed costs:
  Manufacturing (actual, not budgeted)   $154,000
  Fixed marketing & admin                $18,000   $172,000
Net profit                                         $12,620
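The gap between the two bottom lines is exactly the fixed overhead locked into the change in inventory, as the reconciliation below confirms. A one-line check (ours; function name is ours):

```python
# Profit gap between absorption and direct costing.
def profit_gap(opening_units, closing_units, fixed_oh_per_unit):
    return (closing_units - opening_units) * fixed_oh_per_unit

print(profit_gap(4_000, 8_000, 5.00))   # 20,000 = 32,620 - 12,620 (problem 7)
print(profit_gap(4_000, 6_000, 2.50))   # 5,000  = 20,000 - 15,000 (ST 6, July)
```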
Statement of reconciliation 7(e)

Net profit using absorption costing                           $32,620
Add fixed costs in opening inventory (4,000 x $5)             $20,000
                                                              $52,620
Less fixed costs in closing inventory (8,000 x $5)            $40,000
Net profit using direct costing                               $12,620

Alternatively: increase in inventory of 4,000 units x fixed factory overhead $5 = $20,000; $32,620 - $20,000 = $12,620.

Job costing & direct costing. Direct costing can be: integrated with job, process or operation costing; used together with standard costing and activity-based costing. Fixed manufacturing overhead is debited in the general ledger to an account called fixed factory overhead.

Costing for indirect costs

CHAPTER 4 INDIRECT COST

On completion of this topic you should be able to: calculate the total cost of a cost unit using absorption costing methods; describe the problems associated with apportioning and absorbing indirect costs. Independent study: progress test and practice question(s) as set.
The story so far. Absorption costing is a method of costing that, in addition to direct costs, assigns a proportion of, or all, the production overheads to the cost units. Costs are first allocated or apportioned to the cost centres, where they are absorbed into the cost unit using one or more absorption rates (Collis and Hussey, 2007, p. 241). The purpose of absorption costing is to find the total cost of a cost unit for valuing stock, planning and controlling production costs, and determining the selling price.

The absorption approach is used by many firms. It is a costing approach that considers all factory overhead (both variable and fixed) to be product (inventoriable) costs, which become an expense, in the form of manufacturing cost of goods sold, only as sales occur.

Main stages in absorption costing: identify cost centres according to their function (e.g. production department); collect indirect costs in cost centres on the basis of allocation or apportionment; determine an overhead absorption rate (OAR) for each production cost centre (e.g. cost per machine hour); charge indirect costs to products using the OAR and a measure of the product's consumption of the cost centre's cost.

Overhead analysis. The first stage in absorption costing is to prepare an overhead analysis, which shows the allocation or apportionment of the production overheads to the production cost centres. In the previous lecture we carried out an overhead analysis for Cotswold Coolers, which allocated and apportioned the total production overheads of 97,400 between the bottling department and the warehouse on what was considered to be a fair basis.
Cotswold Coolers: Overhead analysis

Overhead            Total     Basis                Bottling   Warehouse
Indirect materials  1,500     Allocated
Indirect labour     45,000    No. of employees     30,000     15,000
Rent and rates      27,000    Area                 9,000      18,000
Electricity         6,000     Area                 4,000      2,000
Depreciation        8,000     Value of machinery   6,000      2,000
Supervision         21,000    No. of employees     14,000     7,000
Stock insurance     500       Value of stock
Total               109,000                        64,000     45,000

Production overhead absorption. The next stage is to find a means of absorbing the production overheads for each cost centre into the cost units passing through them. An overhead absorption rate (OAR) is a means of attributing production overheads to a product or service (Collis and Hussey, 2007, p. 241). The three most commonly used OARs are: the cost unit OAR; the direct labour hour OAR; the machine hour OAR.

Exercise 1: Cost unit OAR. The cost unit OAR is the simplest to use and the formula is: cost centre overheads / number of cost units passing through. 104,000 units were produced during the period. Production overheads were 64,000 for the bottling department and 45,000 for the warehouse. Required: using the formula, calculate the cost unit OAR for each cost centre.

Solution 1: Cost unit OAR. Bottling: 64,000 / 104,000 = 0.62 per unit. Warehouse: 45,000 / 104,000 = 0.43 per unit. Ros has decided to use the cost unit OAR to absorb the warehouse production overheads into the cost of a bottle of water (the cost unit).
Direct labour hour OAR. An alternative is the labour hour OAR: cost centre overhead costs / total direct labour hours. Cotswold Coolers cannot use this OAR because the firm does not use a pay scheme that is linked directly to the product. The labour hour OAR is typically used to absorb production overheads where the firm operates a time-based pay scheme and the level of direct labour hours in the production cost centre is high.

Exercise 2: Machine hour OAR. An alternative is the machine hour OAR: cost centre overhead costs / total machine hours. 104,000 units were produced during the period. Production overheads were 64,000 for the bottling department and 45,000 for the warehouse. Total machine hours were 16,000 for the bottling department and 2,000 for the warehouse. Required: using the formula, calculate the machine hour OAR for each cost centre.

Solution 2: Machine hour OAR. Bottling: 64,000 / 16,000 = 4.00 per machine hour. Warehouse: 45,000 / 2,000 = 22.50 per machine hour. To reflect the high number of machine hours in the bottling department, Ros has decided to use the machine hour OAR for absorbing the production overheads into the cost of a bottle of water (the cost unit).

Exercise 3: Production cost per unit. Direct costs per unit are: mineral water 0.30; bottle, lid and label 0.75. The OAR in the bottling department will be 4.00 per machine hour (from Exercise 2). The OAR in the warehouse will be 0.43 per unit (from Exercise 1). Required: complete the production cost statement and calculate the production cost per unit.
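Before filling in the pro forma, the OAR arithmetic from Solutions 1 and 2 can be checked with a small sketch (ours; function names are ours, figures from the exercises):

```python
# The two OAR formulas used so far.
def cost_unit_oar(overheads, units):
    return overheads / units

def machine_hour_oar(overheads, machine_hours):
    return overheads / machine_hours

print(round(cost_unit_oar(64_000, 104_000), 2))   # 0.62 per unit (bottling)
print(round(cost_unit_oar(45_000, 104_000), 2))   # 0.43 per unit (warehouse)
print(machine_hour_oar(64_000, 16_000))           # 4.0 per machine hour (bottling)
print(machine_hour_oar(45_000, 2_000))            # 22.5 per machine hour (warehouse)
```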
Pro forma: Cotswold Coolers production cost statement (1 unit)

Direct materials
  Mineral water            0.30
  Bottle, lid and label    0.75
Prime cost                 ?
Production overheads
  Bottling dept            ?
  Warehouse                ?
Production cost            ?

Solution 3: Cotswold Coolers production cost statement (1 unit)

Direct materials
  Mineral water                             0.30
  Bottle, lid and label                     0.75
Prime cost                                  1.05
Production overheads
  Bottling dept (0.15 machine hour x 4.00)  0.60
  Warehouse (cost unit OAR)                 0.43
Production cost                             2.08

Exercise 4: Apportioning non-production overheads. The final step is to apportion the non-production overheads (e.g. administration, selling and distribution, research and development costs). A simple method is to add a percentage based on the following formula: (non-production overheads / production cost) x 100. Required: using the formula, calculate the percentage if non-production overheads are 43,250 and the production cost is 216,320.

Solution 4: Apportioning non-production overheads. Non-production overheads are 43,250 and the production cost is 216,320: 43,250 x 100 / 216,320 = 20% of production cost. If we also add a gross profit mark-up of 50% of the production cost, we can calculate the selling price.
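Putting Exercises 1 to 4 together, a minimal sketch (ours) anticipating the total-cost statement below; all rates are the ones derived above:

```python
# Full cost build-up for one bottle of water.
prime_cost = 0.30 + 0.75                              # direct materials
production_cost = prime_cost + 0.15 * 4.00 + 0.43     # bottling + warehouse OARs
non_production = round(production_cost * 0.20, 2)     # 20% of production cost
total_cost = production_cost + non_production
selling_price = total_cost + production_cost * 0.50   # 50% mark-up on production cost
print(round(production_cost, 2), round(total_cost, 2), round(selling_price, 2))
# 2.08 2.5 3.54
```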
Cotswold Coolers: total cost (1 unit)

Direct materials
  Mineral water                             0.30
  Bottle, lid and label                     0.75
Prime cost                                  1.05
Production overheads
  Bottling dept (0.15 machine hour x 4.00)  0.60
  Warehouse (cost unit OAR)                 0.43
Production cost                             2.08
Non-production overheads (2.08 x 20%)       0.42
Total cost                                  2.50
Profit (2.08 x 50%)                         1.04
Selling price                               3.54

Using predetermined absorption rates. Normally predetermined overhead absorption rates (based on estimates) are used, because the actual figures are not available until the end of the period. Where the predetermined overhead that has been absorbed is higher than the actual overhead, the variance is known as over-absorption, and this reduces expenses in the profit and loss account. Where the predetermined overhead that has been absorbed is lower than the actual overhead, the variance is known as under-absorption, and this increases expenses in the profit and loss account.

INCOME STATEMENT. The income statement or profit and loss statement summarizes the firm's revenues and expenses over a period of time (a month, a quarter, or a year). The income statement is used to evaluate revenue and expenses that occur in the interval between consecutive balance sheet statements. Revenues - Expenses = Net Profit (Loss).

Here is an example of an income statement.

Operating revenues and expenses
  Operating revenues
    Sales                              $28,610
    (minus) Returns and allowances     580
    Total operating revenues           28,030
  Operating expenses
    Cost of goods and services sold (labor, materials, indirect costs)
    Selling and promotion
    Depreciation
    General and administrative
    Lease payments
    Total operating expense            17,250
  Total operating income               10,780
  Non-operating revenues and expenses
    Rents
    Interest receipts
    (minus) Interest payments          120
    Total non-operating income         460
  Net income before taxes              11,240
  Income taxes (35%)                   3,930
  Net profit (loss) for year           $7,310

SOME FINANCIAL RATIOS DERIVED FROM THE INCOME STATEMENT

Interest coverage = total income / interest payments = (28,610 - 17,250) / 120 = 94.7
Net profit ratio = net profit / net sales revenue = 7,310 / 28,030 = 26.1%

TRADITIONAL COST ACCOUNTING

Direct costs:
  Direct material: all material that is used in manufacturing a product.
  Direct labor: wages of the direct "touch" labor needed to build one unit.
Indirect costs (also known as overhead):
  Shipping and receiving; quality control; engineering; rent, insurance, etc.; all other expenses which are not direct labor or direct material.

ABSORPTION COSTING. To allocate indirect cost (OH) to different products, accountants use quantities such as direct-labor hours, direct-labor cost, material cost, or total direct cost as the metric. For example, if direct labor-hours is the metric, then overhead will be allocated based on overhead dollars per direct-labor hour. Each product will then absorb (or be allocated) overhead costs based on the direct labor hours it consumes.
ABSORPTION COSTING: allocation metrics

Metric i               Unit allocation rate R_i     Unit allocation of OH cost
Direct labour hours    $OH / total DL hours         R_i x DL hours per unit
Direct labour cost     $OH / total DL cost          R_i x DL cost per unit
Direct material cost   $OH / total DM cost          R_i x DM cost per unit
Total direct cost      $OH / total direct cost      R_i x total direct cost per unit

Example: total overhead is $850,000.

                           Standard    Premium     Total
Number of units per year   750         400
Labor cost (each)          $400        $500
Materials cost (each)      $550        $900
Total labor cost           $300,000    $200,000    $500,000
Total materials cost       $412,500    $360,000    $772,500

Allocation by labor cost: overhead/labor = 850,000 / 500,000 = 1.70; Standard 1.70 x $300,000 = $510,000; Premium 1.70 x $200,000 = $340,000 (total $850,000).

Allocation by material cost: overhead/material = 850,000 / 772,500 = 1.100; Standard 1.100 x $412,500 = $453,884; Premium 1.100 x $360,000 = $396,117 (total $850,000).
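A short sketch (ours; names are ours, figures from the example) that reproduces both allocations before the unit costs are built up below:

```python
# Allocating one overhead pool by two different metrics.
units = {"Standard": 750, "Premium": 400}
labor_each = {"Standard": 400, "Premium": 500}
material_each = {"Standard": 550, "Premium": 900}
OVERHEAD = 850_000

def allocate(cost_each):
    totals = {p: units[p] * cost_each[p] for p in units}   # total metric per product
    rate = OVERHEAD / sum(totals.values())                 # $OH per $ of metric
    return rate, {p: rate * t for p, t in totals.items()}

print(allocate(labor_each))     # rate 1.70 -> Standard 510,000; Premium 340,000
print(allocate(material_each))  # rate ~1.10 -> Standard ~453,884; Premium ~396,117
```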
Unit cost based on $DL allocation of OH:
              Standard              Premium
  DM          550                   900
  DL          400                   500
  OH          400 x 1.70 = 680      500 x 1.70 = 850
  Unit cost   1,630                 2,250

Unit cost based on $DM allocation of OH:
              Standard              Premium
  DM          550                   900
  DL          400                   500
  OH          550 x 1.100 = 605     900 x 1.100 = 990
  Unit cost   1,555                 2,390

CONCLUSIONS. Direct costs are allocated to the cost unit. Production overheads are allocated or apportioned to the cost centres on a fair basis and absorbed into the cost unit using an appropriate OAR. Non-production overheads can be absorbed into the cost unit by adding a percentage based on the proportion of non-production overheads to the total production cost. But a limitation of absorption costing is that it is based on arbitrary decisions about the basis for apportionment and absorption of overheads.

CHAPTER 5 MARGINAL COST. Using direct (marginal) costing for decision making.
What is direct costing? The direct costing method (marginal costing) is an inventory valuation / costing model that includes only the variable manufacturing costs in the cost of a unit of product: direct materials (those materials that become an integral part of a finished product and can be conveniently traced into it); direct labor (those factory labor costs that can be easily traced to individual units of product; also called touch labor); and variable manufacturing overhead. The entire amount of fixed costs is expensed in the year incurred.

The principles of marginal costing.
1. For any given period of time, fixed costs will be the same for any volume of sales and production (provided that the level of activity is within the "relevant range"). Therefore, by selling an extra item of product or service: revenue will increase by the sales value of the item sold; costs will increase by the variable cost per unit; profit will increase by the amount of contribution earned from the extra item.
2. If the volume of sales falls by one item, the profit will fall by the amount of contribution earned from that item.
3. Profit measurement should be based on an analysis of total contribution. Since fixed costs relate to a period of time, and do not change with increases or decreases in sales volume, it is misleading to charge units of sale with a share of fixed costs.
4. When a unit of product is made, the extra costs incurred in its manufacture are the variable production costs. Fixed costs are unaffected, and no extra fixed costs are incurred when output is increased.

Features of marginal costing.
1. Cost classification: the marginal costing technique makes a sharp distinction between variable costs and fixed costs. It is on the basis of variable cost that production and sales policies are designed by a firm following the marginal costing technique.
2. Stock/inventory valuation: under marginal costing, inventory/stock for profit measurement is valued at marginal cost. This is in sharp contrast to the total unit cost under the absorption costing method.
3. Marginal contribution: the marginal costing technique makes use of marginal contribution for making various decisions. Marginal contribution is the difference between sales and marginal cost. It forms the basis for judging the profitability of different products or departments.

Cost-Volume-Profit (CVP) analysis. A systematic method of examining the relationship between changes in activity and changes in total sales revenue, expenses and net profit. CVP analysis is subject to a number of underlying assumptions and limitations. The objective of CVP analysis is to establish what will happen to the financial results if a specified level of activity or volume fluctuates.

CVP analysis assumptions: all other variables remain constant; a single product or constant sales mix; total costs and total revenue are linear functions of output; the analysis applies to the relevant range only; costs can be accurately divided into their fixed and variable elements; the analysis applies only to a short time horizon; complexity-related fixed costs do not change.
CVP diagram. [Chart shown on slide.]

A mathematical approach to CVP analysis:

  NP = Px - (a + bx),

where NP is net profit, x the units sold, P the selling price, b the unit variable cost and a the total fixed costs.

Break-even and related formulas:
  TR - Profit = FC + VC
  Contribution = TR - VC
  Profit = Contribution - FC
  Break-even (units) = FC / contribution per unit
  Break-even (sales revenue) = FC / PV ratio, where the PV (profit-volume) ratio = contribution / selling price

Margin of safety: indicates by how much sales may decrease before a loss occurs.
  Margin of safety (units) = profit / contribution per unit
  Margin of safety (sales revenue) = profit / PV ratio
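These formulas translate directly into code. A minimal sketch (ours); the figures in the demo calls are illustrative only, not from the slides:

```python
# Break-even and margin-of-safety helpers from the CVP formulas above.
def contribution_per_unit(price, unit_variable_cost):
    return price - unit_variable_cost

def break_even_units(fixed_costs, price, unit_variable_cost):
    return fixed_costs / contribution_per_unit(price, unit_variable_cost)

def break_even_revenue(fixed_costs, price, unit_variable_cost):
    pv_ratio = contribution_per_unit(price, unit_variable_cost) / price
    return fixed_costs / pv_ratio

def margin_of_safety_units(profit, price, unit_variable_cost):
    return profit / contribution_per_unit(price, unit_variable_cost)

# Illustrative inputs: FC 60,000, price 10, unit VC 6, current profit 8,000.
print(break_even_units(60_000, 10.0, 6.0))       # 15000.0 units
print(break_even_revenue(60_000, 10.0, 6.0))     # 150000.0
print(margin_of_safety_units(8_000, 10.0, 6.0))  # 2000.0 units
```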
Range of goods planning (1). [Worked-example slides; most figures were lost in extraction.] For three products A, B and C, the slides tabulate price (sales), variable cost, allocated fixed cost, total cost, profit and contribution, per unit and in total, first for increases in activity level with unlimited capacity and then with limited capacity. A pricing example (price 250 $ per unit) contrasts choice 1, better quality (higher price, higher fixed costs), with choice 2, a lower price. Where labour hours are the limiting factor, the products are ranked by contribution per labour hour and compared against demand in units, total labour demand, the break-even point and capacity.
To produce or to buy. [Worked-example slides comparing, per unit and in total, the cost of producing (price, variable cost, allocated fixed cost) with the cost of buying in unlimited quantities; figures not recoverable.]

Advantages. Direct costing is simple to understand. It provides more useful information for decision-making. Direct costing removes from profit the effect of inventory changes. It is effective in internal reporting, for frequent profit statements and for the measurement of managerial performance. Direct costing avoids fixed overheads being capitalized in unsaleable stocks. The effects of alternative sales or production policies can be more easily assessed, so that decisions yield the maximum return to the business. By concentrating on maintaining a uniform and consistent marginal cost, practical cost control is greatly facilitated.

Disadvantages. The separation of costs into fixed and variable is difficult and sometimes gives misleading results. Direct costing underestimates the importance of fixed costs. Full costing systems also apply overhead under normal operating volume, which suggests that no advantage is gained by direct costing. Under direct costing, stocks and work in progress are understated; the exclusion of fixed costs from inventories affects profit, and a true and fair view of the financial affairs of an organization may not be clearly transparent. The volume variance in standard costing also discloses the effect of fluctuating output on fixed overhead. Marginal cost data become unrealistic in the case of highly fluctuating levels of production, e.g. in seasonal factories.

Disadvantages (2). Application of fixed overhead depends on estimates, and there may be under- or over-absorption of overhead. Control effected by means of budgetary control is also accepted by many. In order to know the net profit, we should not be satisfied with contribution alone; fixed overhead is also a valuable item. A system which ignores fixed costs is less effective, since a major portion of fixed cost is not taken care of under marginal costing. In practice, sales price, fixed cost and variable cost per unit may vary; thus, the assumptions underlying the theory of marginal costing sometimes become unrealistic. For long-term profit planning, absorption costing is the only answer.
Direct vs. absorption (full) costing

Variable manufacturing costs:
  Direct costing: are assigned to the products.
  Absorption costing: are assigned to the products.
Fixed manufacturing overheads:
  Direct costing: are regarded as period costs (written as a lump sum to the profit and loss account).
  Absorption costing: are allocated to the products (included in inventory valuation) and are added to the variable manufacturing cost of sales to determine total manufacturing costs.
Non-manufacturing overheads:
  Direct costing: are period costs.
  Absorption costing: are period costs.

Direct costing: profit is a function of sales; is recommended where indirect costs are a low proportion of an organization's total costs; is used for managerial decision-making and control; is used mainly for internal purposes.
Absorption costing: profit is a function of both sales and production; assigns indirect costs to cost objects; is widely used for cost control purposes, especially in the long run; is used for external reporting.

THE END
STANDARD COSTS
Ir. Haery Sihombing/IP, Lecturer (Pensyarah), Fakulti Kejuruteraan Pembuatan, Universiti Teknologi Malaysia Melaka

CHAPTER 6 STANDARD COST

What are standard costs? Standard costs are the expected costs of manufacturing the product.

What is a standard cost system?
1. A standard cost system is a method of setting cost targets and evaluating performance.
2. Target or expected costs are set based on a variety of criteria, and actual performance relative to expected targets is measured.
3. Significant differences between expectations and actual results are investigated.
4. Consistent with the themes developed throughout this class, standard cost systems are a means of helping managers with decision making and control.

Standard cost formulas:
  Standard direct labor cost = expected wage rate x expected number of hours
  Standard direct material cost = expected cost of raw materials x expected number of units of raw material
  Standard overhead cost = expected fixed OH + expected variable overhead x expected number of units to be produced
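The three formulas as a sketch (ours); the demo inputs are illustrative, not taken from the slides:

```python
# The three standard-cost components defined above.
def standard_direct_labor(wage_rate, hours):
    return wage_rate * hours

def standard_direct_material(material_price, material_units):
    return material_price * material_units

def standard_overhead(fixed_oh, variable_oh_per_unit, units_produced):
    return fixed_oh + variable_oh_per_unit * units_produced

print(standard_direct_labor(12.0, 2.5))         # 30.0 per unit (illustrative)
print(standard_direct_material(3.0, 4.0))       # 12.0 per unit (illustrative)
print(standard_overhead(75_000, 1.25, 30_000))  # 112500.0 in total (illustrative)
```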
TARGET COSTING
1. The market place determines the selling price of the future product.
2. The company determines the profit margin it desires to achieve on this product.
3. The difference between the selling price and the profit margin is the target cost.

Why use a standard cost system?
1. Standards are important for decision making: how we produce our product; how we price our product; contract billing.
2. Monitoring manufacturing: large variances may be indicative of problems in production.
3. Performance measurement: deviations between actual results and standards are often used as a measure of a manager's performance. Who sets the standard?

How do we set the standards? Theoretically the standard should be the expected cost of producing the product. General practices: prior years' performance; expected future performance under normal operating conditions; optimistic standards (as a motivator).

Important considerations in setting standards:
1. Why are senior managers using standards: pricing, performance measurement, production decisions?
2. What happens if managers fail to meet the standards?
3. Standards are supposed to represent the opportunity cost of production.
Example 1. [Worked example presented on slides; figures not recoverable.]
Example 1 (Question). What do we do with the raw materials price variance? Who do we hold responsible? What do we do with the raw materials quantity variance? Who do we hold responsible? [The accompanying slides work through the direct labor wage variance and the direct labor efficiency variances.]

STANDARD COSTS. Budgets are TOTAL amounts; a standard cost is a PER UNIT budget amount.
Ideal vs. normal standards. An ideal standard is the theoretical best case, which assumes 100% efficiency. A normal standard should represent a level of efficiency that is attainable under normal operating conditions. The setting of the standard is a management judgment call and must reflect expected and acceptable inefficiencies.

Analysis of direct material variances. The TOTAL variance for direct materials must be analyzed in terms of a quantity variance and a price variance.
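A sketch (ours) of the usual price/quantity split of the total direct-materials variance; sign conventions vary by textbook, and the demo figures are illustrative only:

```python
# Split the total direct-materials variance into price and quantity components.
def material_variances(actual_qty, actual_price, std_qty, std_price):
    price_variance = (actual_price - std_price) * actual_qty
    quantity_variance = (actual_qty - std_qty) * std_price
    return price_variance, quantity_variance   # positive = unfavourable

print(material_variances(actual_qty=4_200, actual_price=1.05,
                         std_qty=4_000, std_price=1.00))
# approx (210.0, 200.0) -> total variance 410 unfavourable
```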
Analysis of direct labor variances. The analysis of the labor variance works the same way mechanically as the analysis of direct materials variances: where direct materials are split into quantity and price components, direct labor is split into a number-of-hours component and an hourly-cost component.

Analysis of overhead variances. [Worked through on slides; figures not recoverable.]
Example 2: Manufacturing product costs.
Direct costs (can be traced to units produced): direct labor; direct materials.
Overhead (cannot be traced to units): indirect labor (e.g., janitorial, supervisory); indirect materials (e.g., miscellaneous supplies); other (e.g., depreciation, utilities, rent). Overhead is allocated to units based on drivers.
Fate of gravitational collapse in semiclassical gravity
While the outcome of gravitational collapse in classical general relativity is unquestionably a black hole, up to now no full and complete semiclassical description of black hole formation has been thoroughly investigated. Here we revisit the standard scenario for this process. By analyzing how semiclassical collapse proceeds we show that the very formation of a trapping horizon can be seriously questioned for a large set of, possibly realistic, scenarios. We emphasise that in principle the theoretical framework of semiclassical gravity certainly allows the formation of trapping horizons. What we are questioning here is the more subtle point of whether or not the standard black hole picture is appropriate for describing the end point of realistic collapse. Indeed if semiclassical physics were in some cases to prevent formation of the trapping horizon, then this suggests the possibility of new collapsed objects which can be much less problematic, making it unnecessary to confront the information paradox or the run-away end point problem.
PACS numbers: 04.20.Gz, 04.62.+v, 04.70.-s, 04.70.Dy, 04.80.Cc
I Introduction

Although the existence of astrophysical black holes is now commonly accepted, we still lack a detailed understanding of several aspects of these objects. In particular, when dealing with quantum field theory in a spacetime where a classical event horizon forms, one encounters significant conceptual problems, such as the information-loss paradox linked to black hole thermal evaporation [Hawking:science, hawking-ter, hawking-paradox, preskill].
The growing evidence that black hole evaporation may be compatible with unitary evolution in string-inspired scenarios (see, e.g., reference [AdS]; see, however, a recent article by D. Amati [Amati:2006fr] for an alternative point of view on the significance of these results) has in recent years led to a revival of interest in, and extensive modification of, early alternative semiclassical scenarios [Roman-Bergmann] for the late stages of gravitational collapse [hayward, ashtekar-bojowald]. (See also [tipler, bardeen, york].) Indeed, while it is by now certain that the outcome of a realistic classical collapse is necessarily a standard black hole delimited by an event horizon (that is, a region $\mathcal{B}$ of the total spacetime $\mathcal{M}$ which does not overlap with the causal past of future null infinity: $\mathcal{B} = \mathcal{M} \setminus J^{-}(\mathscr{I}^{+}) \neq \emptyset$), it has recently been suggested that only apparent or trapping horizons might actually be allowed in nature, and that somehow semiclassical or quantum gravitational effects [ashtekar-bojowald, hawking-info, mathur] could prevent the formation of a (strict, absolute) event horizon ("The way the information gets out seems to be that a true event horizon never forms, just an apparent horizon": Stephen Hawking in the abstract to his GR17 talk [hawking-info]), and hence possibly evade the necessity of a singular structure in their interior.
Note that Hawking radiation would still be present, even in the absence of an event horizon [hajicek, essential]. Moreover, the present authors have noticed that, kinematically, a collapsing body could still emit a Hawking-like Planckian flux even if no horizon (of any kind) is ever formed at any finite time [quasi-particle-prl] (recently, it was brought to our attention that this possibility was also pointed out in a paper by P. Grove [grove]); all that is needed being an exponential approach to apparent/trapping horizon formation in infinite time. Since in this case the evaporation would occur in a spacetime where information by construction cannot be lost or trapped, there would be no obstruction in principle to its recovery by suitable measurements of quantum correlations. (The evaporation would be characterized by a Planckian spectrum and not by a truly thermal one.)
Inspired by these investigations we wish here to revisit the basic ideas that led in the past to the standard scenario for semiclassical black hole formation and evaporation. We shall see that, while the formation of the trapping horizon (or indeed most types of horizon) is definitely permitted in semiclassical gravity, nonetheless the actual occurrence or non-occurrence of a horizon will depend delicately on the specific dynamical features of the collapse.
Indeed, we shall argue that in realistic situations one may have alternative end points of semiclassical collapse which are quite different from black holes, and intrinsically semiclassical in nature. Hence, it may well be that the compact objects that astrophysicists currently identify as black holes correspond to rather different physics. We shall here suggest such an alternative description by proposing a new class of compact objects (that might be called "black stars") in which no horizons (or ergoregions) are present; these "black stars" are nevertheless distinct from the recently introduced "gravastars" [gravastar]. The absence of these features would make such objects free from some of the daunting problems that plague black hole physics.
II Semiclassical collapse: The standard scenario
Let us begin by revisiting the standard semiclassical scenario for black hole formation. For simplicity, in this paper we shall consider only non-rotating, neutral, Schwarzschild black holes; however, all the discussion can be readily generalized to other black hole solutions.
Consider a star of mass $M$ in hydrostatic equilibrium in empty space. For such a configuration the appropriate quantum state is well known to be the Boulware vacuum state [boulware], which is defined unambiguously as the state with zero particle content for static observers, and is regular everywhere both inside and outside the star (this state is also known as the static, or Schwarzschild, vacuum [birrell-davies]). If the star is sufficiently dilute (so that its radius is very large compared to $2M$), then the spacetime is nearly Minkowskian and such a state will be virtually indistinguishable from the Minkowski vacuum. Hence, the expectation value of the renormalized stress-energy-momentum tensor (RSET) will be negligible throughout the entire spacetime. This is the reason why, when calculating the spacetime geometry associated with a dilute star, one only needs to care about the classical contribution to the stress-energy-momentum tensor (SET).
Imagine now that, at some moment, the star begins to collapse. The evolution proceeds as in classical general relativity, but with some extra contributions, as the spacetime dynamics will also affect the behaviour of any quantum fields that are present, giving rise to both particle production and additional vacuum polarization effects. Contingent upon the standard scenario being correct, if we work in the Heisenberg picture there is a single globally defined regular quantum state that describes these phenomena.
For simplicity, consider a massless quantum scalar field and restrict the analysis to spherically symmetric solutions. Every mode of the field can (neglecting back-scattering) be described as a wave coming in from $\mathscr{I}^{-}$ (i.e., from $r \to \infty$, $t \to -\infty$), going inwards through the star till bouncing at its center ($r = 0$), and then moving outwards to finally reach $\mathscr{I}^{+}$. As in this paper we are going to work in 1+1 dimensions (i.e., we shall ignore any angular dependence), for later notational convenience instead of considering wave reflections at $r = 0$ we will take two mirror-symmetric copies of the spacetime of the collapsing star glued together at $r = 0$ (see Fig. 1). In one copy $r$ will run from $-\infty$ to $0$, and in the other from $0$ to $+\infty$. Then one can concentrate on how the modes change on their way from $\mathscr{I}^{-}$ (i.e., $r = -\infty$, $t = -\infty$) to $\mathscr{I}^{+}$ (i.e., $r = +\infty$, $t = +\infty$). Hereafter, we will always implicitly assume this construction and will not explicitly specify "left" and "right" except where it might cause confusion.
Now, one can always write the field operator as
$$\hat{\phi} = \int_{0}^{\infty} d\omega \left[ \hat{a}_{\omega}\, \phi_{\omega} + \hat{a}_{\omega}^{\dagger}\, \phi_{\omega}^{*} \right], \qquad (1)$$
where the $\phi_{\omega}$ are the modes that near $\mathscr{I}^{-}$ behave asymptotically as (we work in natural units)
$$\phi_{\omega} \simeq \frac{1}{\sqrt{4\pi\omega}}\; e^{-i\omega U}, \qquad (2)$$
with $\omega > 0$ and $U$ a null coordinate regular on $\mathscr{I}^{-}$. One can then identify the state $|\mathrm{in}\rangle$ as the one that is annihilated by the destruction operators associated with these modes: $\hat{a}_{\omega} |\mathrm{in}\rangle = 0$. (One could also expand using a wave packet basis [hawking-ter], which is a better choice if one wants to deal with behaviour localized in space and time.) Since the spacetime outside the star is isometric with a corresponding portion of Kruskal spacetime, and is static in the far past, the modes have the same asymptotic expression as the Boulware modes [boulware] near $\mathscr{I}^{-}$ (i.e., for $t \to -\infty$). Hence $|\mathrm{in}\rangle$, the quantum state corresponding to the physical collapse, is (near $\mathscr{I}^{-}$) indistinguishable from the Boulware vacuum $|0_{\mathrm{B}}\rangle$. (But this will of course no longer be true as one moves significantly away from $\mathscr{I}^{-}$.)
Now, the semiclassical collapse problem consists of studying the evolution of the geometry as determined by the semiclassical Einstein equations
$$G_{\mu\nu} = 8\pi \left( T_{\mu\nu} + \langle \hat{T}_{\mu\nu} \rangle \right), \qquad (3)$$
where $T_{\mu\nu}$ is the classical part of the SET. Significant deviations from the classical collapse scenario can appear only if the RSET in equation (3) becomes comparable with the classical SET. In this analysis there are (at least) two important results from the extant literature that have to be taken into account:
If a quantum state is such that the singularity structure of the two-point function is initially of the Hadamard form, then Cauchy evolution will preserve this feature [FSW], at least up to the edge of the spacetime (which might be, for instance, a Cauchy horizon [HawBook]). The state $|\mathrm{in}\rangle$ certainly satisfies this Hadamard condition at early times [fnw], hence it must satisfy it also in the future, even if a trapping/event horizon forms. (A trapping/event horizon is not a Cauchy horizon, and is not an obstruction to maintaining the Hadamard condition.) As a consequence of this fact the RSET cannot become singular anywhere on the collapse geometry, independently of whether or not a trapping/event horizon is formed. (It is important to understand exactly what this theorem does and does not say: if we work in a well-behaved coordinate system, where the matrix of metric coefficients is nonsingular and has finite components, then the coordinate components of the RSET are likewise finite. But note that finite does not necessarily imply small.)
For specific semiclassical models of the collapsing star it has been numerically demonstrated (modulo several important technical caveats) that the value of the RSET remains negligibly small throughout the entire collapse process, including the moment of horizon formation [pp]. (Similar results were, after some discussion, found in (1+1)-dimensional models based on dilaton gravity [2d].) Subsequently, in this scenario quantum effects manifest themselves via the slow evaporation of the black hole.
Thus in this standard scenario nothing prevents the formation of trapped regions (or trapped/apparent/event horizons). Given that quantum-induced violations of the energy conditions [visser, cosmo99] are taken to be small enough at this stage of the collapse, one can still use Penrose's singularity theorem to argue that a singularity will then tend to form. Assuming that quantum gravity effects do not conspire to avoid this conclusion, then, in conformity with all extant calculations and the cosmic censorship conjecture, a spacelike singularity and a true event horizon will form. The collapsed star settles down into a quasi-static black hole and then ultimately evaporates.
This last feature can be easily derived by considering an expansion of the field in a basis which contains modes that near $\mathscr{I}^{+}$ (i.e., for $r \to \infty$, $t \to +\infty$) behave asymptotically as
$$\phi_{\omega} \simeq \frac{1}{\sqrt{4\pi\omega}}\; e^{-i\omega u}, \qquad (4)$$
with $\omega > 0$ and $u$ a null coordinate regular on $\mathscr{I}^{+}$, so defining creation and annihilation operators that differ from those associated with the modes of equation (2). In a static configuration a (spherical) wave coming from $\mathscr{I}^{-}$ is blue-shifted on its way towards the center of the star, and is then equally red-shifted on its way out to $\mathscr{I}^{+}$, arriving there undistorted. However, in a dynamically collapsing configuration the red-shift exceeds the blue-shift, so that an initial wave at $\mathscr{I}^{-}$ is distorted by the time it reaches $\mathscr{I}^{+}$. In this sense the dynamical spacetime acts as a "processing machine" for the normal modes of the field. Expanding the distorted wave in terms of the undistorted basis at $\mathscr{I}^{+}$ tells us the amount of particle creation due to the dynamics. In particular one can take a wave packet centered on frequency $\omega$ on $\mathscr{I}^{-}$ and ask what its typical frequency, say $\Omega$, will be when it arrives on $\mathscr{I}^{+}$. The Bogoliubov coefficients that allow us to express the annihilation operators related to the modes (4) in terms of the creation operators pertaining to the modes (2) are related to the number of particles seen by asymptotic observers on $\mathscr{I}^{+}$, which is nothing else than the thermal flux of Hawking radiation [Hawking:science, hawking-ter, birrell-davies].
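For orientation, a small numerical aside (ours, using the standard textbook formula rather than anything computed in this paper): the temperature characterizing this thermal flux is tiny for stellar masses.

```python
# Hawking temperature T_H = hbar c^3 / (8 pi G M k_B), here for one solar mass.
import math

hbar, c, G, k_B = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23
M_sun = 1.989e30  # kg

def hawking_temperature(mass_kg):
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(hawking_temperature(M_sun))  # ~6.2e-8 K: tiny, but nonzero
```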
This can be rephrased by saying that the physical state corresponding to the collapse behaves like the Unruh vacuum [unruh] of Kruskal spacetime near the event horizon $H$ and near $\mathscr{I}^{+}$ (i.e., for $r \to \infty$). Indeed, in the Kruskal spacetime the Unruh state is a zero-particle state for a freely-falling observer crossing the horizon, and corresponds to a thermal flux of particles at the Hawking temperature for a static observer at infinity [birrell-davies, scd]. Given that at late times classical black holes generated via classical gravitational collapse are virtually indistinguishable from eternal black holes (see, for instance, the classical theorem in [wald]), the Unruh vacuum is the only quantum state on Kruskal spacetime which appropriately (near $H$ and $\mathscr{I}^{+}$) simulates the physical vacuum in a spacetime with an event horizon formed via gravitational collapse.
However, as previously mentioned, this standard scenario leads to several well-known problems (or at the very least, disquieting features):
Modes corresponding to quanta detected at $\mathscr{I}^{+}$ have an arbitrarily high frequency on $\mathscr{I}^{-}$ (this is the so-called trans–Planckian problem [unruh]).
The run-away end point of the evaporation process (the Hawking temperature is inversely proportional to the black hole mass) prevents any well-defined semiclassical answer regarding the ultimate fate of a black hole [Hawking:science].
If eventually the black hole completely evaporates, leaving just thermal radiation in flat spacetime, then it would seem that nothing would prevent a unitarity-violating evolution of pure states into mixed states, contradicting a basic tenet of (usual) quantum theory (this is one aspect of the so-called information-loss paradox [hawking-paradox, preskill]). Such a difficulty in reconciling quantum mechanics with general relativity seems to persist even when imagining many alternative scenarios for the end point of the evaporation, so that one can still continue to talk about an information-loss problem [hawking-paradox, preskill].
All in all, it is clear that this semiclassical collapse scenario is plagued by significant difficulties and obscurities that still need to be understood. For this reason we think it is worthwhile to step back to a clean slate, and to revisit the above story, uncovering all the hidden assumptions.
III Semiclassical collapse: A critique
It is easy to argue that one cannot trust a semiclassical gravity analysis once a collapsing configuration has entered into a high-curvature (Planck-scale) regime; this is expected in the immediate neighborhood of the region in which the classical equations predict the appearence of a curvature singularity. Once the formation of a trapped region is assumed, any solution of the problems mentioned above seems (naively) to demand an analysis in a full-fledged theory of quantum gravity. Here, however, we are questioning the very formation of a trapped region in astrophysical collapse. In analyzing this question we will see that semiclassical gravity provides a useful and sensible starting point. Moreover, we will also show that it provides some indications as to how the standard scenario might be modified.
III.1 The trans–Planckian problem
One potential problem with the semiclassical gravity framework, when used to analyze the onset of horizon formation, is the trans–Planckian problem. While this problem is usually formulated in static spacetimes, for our purposes we wish to look back to its origin in a collapse scenario.
We can, as usual, encode the dynamics of the geometry in the relation between the affine null coordinates and , regular on and , respectively. Neglecting back-scattering, a mode of the form (2) near takes, near , the form
This can be regarded, approximately, as a mode of the type presented in equation (4), but now with a u-dependent frequency, where a dot denotes differentiation with respect to u. (Of course, this formula just expresses the redshift undergone by a signal in travelling from past to future null infinity.)
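The displayed mode expressions are missing from this copy; a reconstruction consistent with the surrounding description (the labels ω for the frequency on past null infinity and U(u) for the null-coordinate relation are ours) would be

\phi_\omega(u) \propto e^{-i\omega U(u)}, \qquad \omega(u) \equiv \omega\, \dot U(u),

i.e., a mode that is monochromatic on past null infinity acquires, near future null infinity, the slowly varying instantaneous frequency ω(u).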
In general we can expect a mode to be excited if the standard adiabatic condition
does not hold. It is not difficult to see that this happens for frequencies smaller than
One can then think of this threshold, say ω_c(u), as a frequency marking, at each instant of retarded time u, the separation between the modes that have been excited (ω below ω_c) and those that are still unexcited (ω above ω_c).
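The adiabatic condition and the threshold frequency are also missing; in the standard treatment, which the text appears to follow (notation ours), they read

\left| \frac{\dot\omega(u)}{\omega(u)^2} \right| \ll 1, \qquad \omega_c(u) \sim \left| \frac{\dot\omega(u)}{\omega(u)} \right| = \left| \frac{\ddot U(u)}{\dot U(u)} \right|.

For the late-time exponential behaviour \dot U \propto e^{-\kappa u} characteristic of horizon formation this gives ω_c → κ, so the excited quanta at future null infinity have frequencies of the order of the surface gravity, as expected for Hawking radiation.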
Moreover, Planck-scale modes (as defined on past null infinity) are excited in a finite amount of time, even before the actual formation of any trapped region. Indeed, they start to be excited when the surface of the star is above the classical location of the horizon by a proper distance of about one Planck length, as measured by Schwarzschild static observers. We can see this by observing that the red-shift factor satisfies
where κ is the surface gravity. This then implies that the frequency which an excited mode had on past null infinity grows exponentially with retarded time.
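The displayed relations are missing here; a reconstruction consistent with the stated conclusion (notation ours) runs as follows. For a horizon forming at finite time,

\dot U(u) \simeq A\kappa\, e^{-\kappa u} \qquad\Longrightarrow\qquad \omega'(u) \sim \frac{\omega_c(u)}{\dot U(u)} \sim \frac{e^{\kappa u}}{A},

so the past-null-infinity frequency of a newly excited mode grows exponentially and reaches the Planck scale after u of order \kappa^{-1}\ln(\omega_{\rm Planck}/\kappa), only about 10^2 e-folding times \kappa^{-1} for a stellar-mass object. Moreover, for a static observer a proper distance ℓ above the horizon one has \sqrt{1 - 2M/r} \approx \kappa \ell, so a quantum received at infinity with frequency of order κ had frequency of order \kappa/(\kappa\ell) = 1/\ell at the surface: this is Planckian precisely when ℓ is of the order of the Planck length.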
Hence, the trans–Planckian problem has its roots at the very onset of the formation of the trapping horizon. Furthermore, any complete description of the semiclassical collapse cannot be achieved without at least some assumptions about trans–Planckian physics.
Of course, one can simply assume that there is a natural Planck-scale frequency cutoff for effective field theory in curved spacetimes. Although one cannot completely exclude this possibility, we find that this way of avoiding the trans–Planckian problem is perhaps worse than the problem itself, as it would automatically also imply a shut-down of the Hawking flux in a finite (very small) amount of time. This would eliminate the thermodynamical behaviour of black holes, thus undermining the current explanation for the striking similarity between the laws of black hole mechanics and those of thermodynamics — that they are, in fact, just the same laws davies .
Moreover, such a “hard cutoff” obviously corresponds to a breakdown of Lorentz invariance at the Planck scale. If one is ready to accept such a departure from standard physics, then it seems more plausible (less objectionable?) to conjecture a milder breaking of Lorentz invariance in the form of a modified dispersion relation, a possibility explored in several works on the trans–Planckian problem Jacobson:1999zk . While it seems well understood that the Hawking radiation would survive in this case US , it is less clear what effect such modified dispersion relations might have on the possibility of forming a (presumably frequency-dependent) trapping horizon, and indeed on the very definition of such a concept Barcelo:2006yi .
In what follows we shall adopt a conservative approach and stick, as is usually done, to the standard framework of quantum field theory in curved spacetime, assuming its validity up to arbitrarily high frequencies. Even in the presence of Lorentz-violating effects, this would remain a valid framework if, for example, the scale at which Lorentz violations appear were much higher than the Planck scale Jacobson .
III.2 Vacuum polarization
The other difficulties of the standard scenario previously listed have been linked by different authors to the presence of horizons and of trapping regions in general. As we have previously discussed, several departures from semiclassical gravity have often been called for in order to solve these problems. However, the specific question we now want to raise here is rather different: Is the scenario just described guaranteed to be the one actually realized in semiclassical gravity? Or is it possible that semiclassical gravity allows for alternative endpoints of gravitational collapse, in which these problems are not present? In order to answer these questions we look for possible semiclassical effects which could modify the collapse before the very formation of a trapped region.
In any calculation of semiclassical collapse the choice of the properties of the matter involved (which will be encoded in the characteristics of the classical SET) is, obviously, of crucial importance. Normally the initial conditions at early times are chosen so that one has a static star with any quantum field in its “natural” vacuum state. As we have discussed, this will be virtually indistinguishable from the Boulware vacuum state. In this initial configuration we are sure that the RSET is practically zero throughout spacetime, at least before the collapse is initiated. We now want to inquire into the possibility that such an RSET becomes non-negligible during the collapse.
In the standard semiclassical scenario, it is crucial that the initial Boulware-like structure of the field modes at early times is somehow “excited” by the collapse and converted into an Unruh-like structure both at the horizon and at infinity — this is necessary for compatibility with the presence of a trapping horizon. In fact, if this excitation and conversion were not sufficiently effective to get rid of Boulware-like modes in the proximity of the would-be horizon, then a potential obstruction to the very formation of the horizon may arise. We know in fact that in static geometries there is an intrinsic incompatibility between the Boulware vacuum and the existence of a trapping horizon, as the RSET near the horizon (in a simplified calculation in 1+1 dimensions) is found to be dfu
where we work in an orthonormal basis. A similar result remains valid in the more complicated 3+1-dimensional case scd . The important point is that the denominator vanishes at the horizon, so the RSET acquires a divergent (and energy-condition-violating visser ) contribution. Note that the divergence is present even if the components of the RSET are evaluated in a freely-falling basis scd . (To see that something intrinsic is going on at the horizon it is sufficient to calculate a scalar invariant quadratic in the RSET, and to note that it diverges at the horizon.)
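The displayed equation is missing; the structure of the standard 1+1 result being described, up to a numerical coefficient that depends on the field content, is

\langle T_{\hat\mu\hat\nu} \rangle_{\rm B} \sim -\frac{\kappa^2}{48\pi}\, \frac{1}{1 - 2M/r} \times O(1),

that is, minus the energy density of a thermal bath at the local Tolman temperature T_{\rm loc} = T_H/\sqrt{1 - 2M/r}. This diverges at the horizon, and correspondingly a quadratic invariant such as \langle T_{\mu\nu}\rangle \langle T^{\mu\nu}\rangle blows up as (1 - 2M/r)^{-2}.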
Of course the above result applies to a static spacetime, while we are interested in investigating an intrinsically dynamical scenario, which we moreover know, due to the Fulling–Sweeny–Wald theorem FSW , should act in such a way as to avoid the above divergence. We are hence interested in seeing the precise way in which this happens, and in exploring whether it might leave a route to possibly obtaining large, albeit finite, contributions to the RSET at the onset of horizon formation.
IV The RSET
In calculating the RSET in a dynamical collapse several choices must be made. The major assumption is that we shall for the time being restrict attention to 1+1 dimensions, since then there is a realistic hope of carrying out a complete analytic calculation. Physically, this is not as bad a truncation as it at first seems, since we can always view it as an s-wave approximation to the full 3+1-dimensional problem, with at most a few geometrical factors inserted at strategic places. (For instance, this analytic approximation underlies the subsequent numerical calculation of Parentani and Piran pp .) A second significant choice we will make is to work in a regular coordinate system, in particular in Painlevé–Gullstrand coordinates pg ; analogue . In regular coordinate systems (where the matrix of metric coefficients is both finite and non-singular), the values of the stress-energy-momentum components are direct and useful diagnostics of the “size” of the stress-energy-momentum tensor.
With reference to the diamond-shaped conformal diagram of Fig. 1, we shall start by considering a pair of null coordinates U and V, affine on past null infinity. These coordinates are globally defined over the spacetime and the metric can be written as
Given that we shall be concerned with events which lie outside the collapsing star, on the right-hand side of our diagram, we can also choose a second double-null coordinate patch (u, v), where u is taken to be affine on future null infinity, in terms of which the metric is
where a single function relating u to U describes the coordinate transformation. Then
Furthermore, as long as we are outside the collapsing star it is safe to assume that a Birkhoff-like result holds, and to take the exterior geometry to be that of a static spacetime.
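The two displayed line elements are missing; in double-null form, for a 1+1 geometry, they would read (the conformal factors C, \bar C and the transformation U = p(u) are our own labels)

ds^2 = -C(U,V)\, dU\, dV, \qquad ds^2 = -\bar C(u,v)\, du\, dv, \qquad \bar C(u,v) = C(U,V)\, \frac{dU}{du},

with v = V, since in this construction only the outgoing sector is affected by the collapse.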
Now for any massless quantum field, the RSET (corresponding to a quantum state that is initially Boulware) has components dfu ; birrell-davies
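The component formulas are missing here; for a single massless scalar the classic result of dfu ; birrell-davies takes the form (conventions differ by overall factors, which is presumably why the text immediately notes that the coefficients are unimportant)

\langle T_{uu} \rangle = -\frac{1}{12\pi}\, \bar C^{1/2}\, \partial_u^2\, \bar C^{-1/2}, \qquad
\langle T_{vv} \rangle = -\frac{1}{12\pi}\, \bar C^{1/2}\, \partial_v^2\, \bar C^{-1/2}, \qquad
\langle T_{uv} \rangle = -\frac{1}{96\pi}\, \bar C R,

with R the Ricci scalar of the 1+1 geometry.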
The coefficients arising here are not particularly important, and will in any case depend on the specific type of quantum field under consideration.
The components in the ingoing and mixed sectors will necessarily be well behaved throughout the region of interest; in particular they are the same as in a static spacetime and are known to be regular. On the contrary, the outgoing component shows a more complex structure, due to the non-trivial relation between u and U. A brief computation yields
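The result of the computation is missing; the standard anomalous transformation law of the 2D stress tensor, which reproduces the two-term structure described next (schematic, with U = p(u) as above), is

\langle T_{uu} \rangle = \langle T_{uu} \rangle_{\rm static} - \frac{1}{24\pi}\, \{U; u\}, \qquad
\{U; u\} \equiv \frac{U'''}{U'} - \frac{3}{2} \left( \frac{U''}{U'} \right)^2,

where the first (static Boulware) term is built from the static exterior conformal factor \bar C alone, and the Schwarzian-derivative term encodes the dynamics of the collapse.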
The key point here is that we have two terms: one arising purely from the static spacetime outside the collapsing star, and the other arising purely from the dynamics of the collapse. If, and only if, the horizon is assumed to form at finite time will the leading contributions of these two terms cancel against each other — this is the standard scenario.
Indeed, the first term is exactly what one would compute using the standard Boulware vacuum for a static star. As the surface of the star recedes, more and more of the static spacetime is “uncovered”, and one begins to see regions of the spacetime where the Boulware contribution to the RSET is more and more negative, in fact diverging as the surface of the star crosses the horizon.
IV.2 Regular coordinates
To probe the details of the collapse, it is useful to introduce yet a third coordinate chart — a Painlevé–Gullstrand coordinate chart in terms of which the metric is quasi-particle-prl ; pg ; analogue
This coordinate chart is particularly useful because it is regular at the horizon, so that the finiteness of the stress-energy-momentum components in this chart has a direct physical meaning in terms of regularity of the stress-energy-momentum tensor. (These coordinates are also useful in that they allow one to straightforwardly apply our calculations to acoustic analogue spacetimes, provided one is in a regime in which one can neglect the existence of modified dispersion relations quasi-particle-prl ; analogue .) By setting the spacetime interval to zero, it is easy to see that the null rays are given by
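The line element and the null-ray equations are missing from this copy; in the notation of the cited papers, with c(r) the local propagation speed and v(r) < 0 the “infall velocity” profile (our reconstruction), they read

ds^2 = -\left[c(r)^2 - v(r)^2\right] dt^2 - 2 v(r)\, dt\, dr + dr^2, \qquad \frac{dr}{dt} = v(r) \pm c(r),

with a horizon wherever v = -c, where the outgoing rays momentarily freeze.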
Although inside the collapsing star the metric can depend on t and r in a complicated way, the geometry outside the surface of the star is taken to be static, so the functions c and v do not depend on t. Under these conditions we can integrate along the history of an outgoing ray, from an event just outside the collapsing star to another event near asymptotic future infinity:
Assuming asymptotic flatness, with c tending to 1 and v tending to 0 at large radius, we find for the null coordinate u in the “out” region,
Hence, denoting partial derivatives by subscripts:
In contrast, along an incoming ray leaving asymptotic past infinity and remaining outside the star,
so we have, for the null coordinate:
In addition, by substituting and comparing coefficients of the line element, it is easy to see that the conformal factor and the metric functions are related by
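Reconstructing the missing formulas under the assumptions above (we write the ingoing null coordinate as \bar v to avoid a clash with the velocity profile v(r)):

u = t - \int^r \frac{dr'}{c(r') + v(r')}, \qquad \bar v = t + \int^r \frac{dr'}{c(r') - v(r')},

u_t = 1, \quad u_r = -\frac{1}{c + v}, \qquad \bar v_t = 1, \quad \bar v_r = \frac{1}{c - v}, \qquad \bar C = c^2 - v^2 .

One can check directly that -(c^2 - v^2)\, du\, d\bar v reproduces the Painlevé–Gullstrand line element given above.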
Therefore the components of the RSET can be calculated in any of the equivalent forms:
Some of these formulae are more useful for calculating the static Boulware contribution, others for calculating the dynamical contribution. Since u_r diverges at a horizon (there c + v vanishes), while the ingoing coordinate remains regular, the t–t and t–r components of the RSET are always better behaved (less divergent) than the r–r component. Note that no divergence can arise from the terms proportional to derivatives of the regular ingoing coordinate.
IV.3 Calculation assuming normal horizon formation
Hereafter, we shall for simplicity restrict our attention to the case c = 1. Measuring distance from the horizon by x (so that the horizon sits at x = 0), we can write the asymptotic expansion
where κ can be identified with the surface gravity quasi-particle-prl ; analogue .
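The expansion itself is missing; consistent with the conventions above it would read (our reconstruction)

v(x) = -1 + \kappa x + O(x^2) \qquad\Longrightarrow\qquad \bar C = c^2 - v^2 = 2\kappa x + O(x^2).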
Consider first the static Boulware term in equation (18). We have
The relevant derivative is then the one with respect to x, and we can write
In fact, keeping the subleading terms one finds
By equations (IV.2) and (38), it is clear that, because of the constant leading term, some components of the RSET contain contributions that diverge as the horizon is approached. (The sub-leading terms lead only to finite contributions.)
In counterpoint, assuming horizon formation, let us now calculate the dynamical contribution to the RSET. It is well known that any configuration that produces a horizon at a finite time leads to an asymptotic (large-u) form
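The asymptotic form is missing here; the standard late-time behaviour for a horizon forming at finite time, with U_H and A constants (labels ours, matching the “suitable constants” of the next sentence), is

U(u) \simeq U_H - A\, e^{-\kappa u} \qquad\Longrightarrow\qquad \{U; u\} = -\frac{\kappa^2}{2},

so the dynamical piece of the outgoing RSET component tends to +\kappa^2/48\pi, exactly offsetting the near-horizon limit -\kappa^2/48\pi of the static Boulware piece.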
where U_H and A are suitable constants. Taking into account this asymptotic expression together with the near-horizon expansion (40), it is very easy to see that the potential divergence at the horizon due to the static term is exactly cancelled by the dynamical term. In this way we have recovered the standard result that the RSET at the horizon of a collapsing star is regular.
However, the previous relation is an asymptotic one, and for what we are most interested in (the value of the RSET close to horizon formation) it is important to take into account extra terms that are subdominant at late times. Indeed, we can describe the location of the surface of a collapsing star that crosses the horizon at some finite time by an expansion of the form
where the expansion makes sense for small values of the time remaining before the crossing, and the leading coefficient represents the velocity with which the surface crosses the gravitational radius. Let us consider the time at which a right-moving light ray, labelled by the null coordinates u and U, crosses the surface of the star. Then, on the one hand,
which, for rays crossing the surface shortly before horizon formation, can be approximated by
On the other hand, since the U coordinate of the ray is simply a regular function of its crossing time, we have
Inserting (48) into (49) we obtain an asymptotic expansion
which it is useful to write as
where the remaining factor is a regular function with a finite, non-vanishing late-time limit. Then
The point is that this has a universal contribution coming from the surface gravity, plus messy subdominant terms that depend on the details of the collapse. It is important to note, however, that the corresponding additional contributions to the RSET are finite, in contrast to the one associated with the first term. Indeed, for small values of the crossing velocity,
and so the second term on the right-hand side of equation (52) remains bounded; by equation (38) it gives a finite contribution to the RSET that does not depend on the distance from the horizon, but decays with time. In addition, from a comparison of equations (48)–(50) we see that
so the leading subdominant term in the RSET is inversely proportional to the square of the speed with which the surface of the star crosses its gravitational radius. In particular, at horizon crossing the value of the RSET can be as large as one wants, provided one makes this speed sufficiently small. This would correspond to a very slow collapse in the proximity of trapping-horizon formation. Thus, there is a concrete possibility that (energy-condition-violating) quantum contributions to the stress-energy-momentum tensor could lead to significant deviations from classical collapse when a trapping horizon is just about to form.
IV.4 Calculation assuming asymptotic horizon formation
Another interesting case one may want to consider is one in which the horizon is never formed at finite time, but is only approached asymptotically as time runs to infinity. In particular, in reference quasi-particle-prl it was shown that collapses characterized by an exponential approach to the horizon,
lead to a function U(u) of the form
where the effective surface gravity is half the harmonic mean between κ and the rapidity of the exponential approach,
so that one always has an effective surface gravity smaller than κ. In this case, the calculation of the dynamical part of the RSET leads to exactly the same result as when using expression (44), modulo the substitution of κ by this effective value. However, the non-dynamical part of the RSET remains unchanged. This implies that now, at leading order,
which obviously diverges as the would-be horizon is approached. We stress that this result does not contradict the Fulling–Sweeny–Wald theorem FSW , as the calculation applies only outside the surface of the star, so the divergence appears only at the boundary of spacetime. Nevertheless, this again indicates that there is a concrete possibility that energy-condition-violating quantum contributions to the stress-energy-momentum tensor could lead to significant deviations from classical collapse when a trapping horizon is on the verge of being formed.
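For reference, the missing expressions in this subsection can plausibly be reconstructed as follows (the effective surface gravity κ_eff and the approach timescale τ are our labels). For a surface approaching the gravitational radius exponentially, proportionally to e^{-t/\tau}, reference quasi-particle-prl gives

U(u) \simeq U_H - A\, e^{-\kappa_{\rm eff} u}, \qquad \kappa_{\rm eff} = \frac{1}{2} \cdot \frac{2\kappa\,(1/\tau)}{\kappa + 1/\tau} = \frac{\kappa}{1 + \kappa\tau} < \kappa,

so the dynamical piece now saturates at +\kappa_{\rm eff}^2/48\pi < \kappa^2/48\pi and can no longer completely cancel the static Boulware contribution; the mismatch keeps growing as the surface approaches the would-be horizon.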
IV.5 Physical insight
The key bits of physical insight we have garnered from this calculation are:
- In the standard collapse scenario the regularity of the RSET at horizon formation is due to a subtle cancellation between the dynamical and the static contributions.
- Contributions that can be neglected at late times can be very large at the onset of horizon formation. The actual value of these contributions depends on the rapidity with which the configuration approaches its trapping horizon.
- Once the horizon forms, the above contributions are exponentially damped in time. However, the analysis of a configuration that approaches horizon formation only asymptotically tells us that, for as long as horizon formation is delayed, there are contributions that keep growing with time.
Hence the RSET can apparently acquire large (and energy-condition-violating visser ) contributions when a collapsing object approaches its Schwarzschild radius, depending on the details of the dynamics. The final lesson to draw from this part of our investigation is that not all classical matter configurations compatible with the formation of a trapping horizon in classical general relativity necessarily lead to the same final state once semiclassical effects are taken into account. In particular, for classical collapses that exhibit a slow approach to horizon formation, our calculation indicates that there will be a large (albeit always finite, in compliance with FSW ) contribution from the RSET, a contribution which can potentially drive the semiclassical collapse to classically unforeseeable end points. For these reasons we wish next to further explore the alternative situation in which the horizon is only formed asymptotically.
V A quasi-black hole scenario
The history of the confrontation between general relativity and quantum physics has already shown several times that quantum mechanical effects in matter can prevent the formation of black holes in situations in which such formation would classically seem unavoidable. Without quantum mechanics, objects such as white dwarfs and neutron stars would never have been predicted in the first place. Similarly, in this paper we have seen that if for any reason the collapse of the matter forces it into some (metastable) state in which horizon formation is approached sufficiently slowly, then large quantum vacuum effects could prevent the very formation of a trapping horizon. The resulting object could then be considered the most compact and most quantum mechanical kind of star. These objects, which we shall tentatively call “black stars” (Newtonian versions of “black stars”, more often called “dark stars”, have a very long history in astrophysics, dating back to Michell Michell and Laplace Laplace ; for recent commentary on the historical connections between Michell, Cavendish, and Laplace, see Lynden-Bell ), would be supported by a form of quantum pressure of universal nature, being characterized only by their closeness to the formation of a trapping horizon.
Lacking an understanding of the physics of matter at densities well beyond that characterizing neutron stars, we cannot reliably assert anything about the stability of black stars. However, the first motivation for our investigation was to see whether semiclassical physics can allow for compact objects closely mimicking black hole features, including Hawking radiation, without incurring the same problems that plague the standard scenario. In this sense, static configurations do not seem viable candidates, as the absence of a trapping horizon together with staticity prevents any possibility of emission of a Hawking flux. (It is perhaps worthwhile to stress here that such static black stars do not belong to the class of objects known in the literature as gravastars, at least not without the addition of considerable extra assumptions, given that the former are compact agglomerates of matter while the latter have a de Sitter-like interior gravastar .) On the other hand, evolving configurations that continue to asymptotically approach their would-be horizon would produce quantum radiation at late times. (This approach could be completely monotonic or have oscillating components; such oscillations can also produce bursts of radiation at the Hawking temperature thooft .)
In order for such a scenario to be realized in nature, one can speculate that in some cases, once matter has slowed down the collapse enough to allow a sizeable RSET to pile up, the latter would not be able to completely stop the collapse, but would instead lead to an evolving configuration in which every layer of the collapsing star lies very close to where the classical horizon of the matter inside it would be located, continually and asymptotically approaching it. We can call this object a “quasi-black hole”.
In order to know exactly how the star asymptotically approaches the horizon in this scenario, one should solve Einstein’s semiclassical field equations with back-reaction — obviously a very difficult task. Without the result of such an explicit calculation, it is nevertheless reasonable to conjecture that the approach can either follow a power law or be exponential, with some timescale. The case of a power law seems, however, uninteresting for our purposes, because it would not lead to a Planckian emission quasi-particle-prl . On the contrary, an exponential approach is associated with the emission of radiation at a modified temperature quasi-particle-prl . At least for astrophysical black holes, it is also reasonable to think that this timescale is very short at the beginning of the evaporation process, so that the emission temperature is indistinguishable from the standard Hawking temperature. During the evaporation the timescale increases so that, in the long run, the temperature is determined by it and tends to zero. Hence we could in principle have a “graceful exit” from the evaporation process; that is, one could avoid the standard run-away endpoint. Meanwhile, the evaporation could be visualized as a continuous chase between the surface of the star and its (receding) Schwarzschild radius.
Indeed, possibilities for such a never-ending collapse were already envisaged in 1976, soon after the discovery of Hawking radiation boulware2 ; gerlach , and have recently been proposed again Vachaspati:2006ki (although via different back-reaction mechanisms). It is important, however, to understand that in the quasi-black hole scenario discussed here the Hawking flux only affects the late-time evolution, and is not the agent that prevents horizon formation in the first place. The initial slow-down of the collapse is in this case due to matter-related high-energy physics. This provides the time necessary for the vacuum polarization to grow and finally modify the evolution of the collapse toward an asymptotic regime.
Of course, the final state is reached only after a very long time (for typical estimates of the evaporation timescale, see reference birrell-davies ), so according to this scenario a collapsing star forms an object that, for a long period, is indistinguishable from a standard black hole, further justifying our nomenclature of “quasi-black hole”. This object would still evaporate with a Planckian spectrum quasi-particle-prl , but (since there is no event horizon) it would not be truly “thermal” (the quantum state is in fact a squeezed state squeezed ), hence there would be no information-loss problem. The partners of the particles emitted towards infinity, instead of accumulating inside a trapping horizon as in the standard scenario, would now simply be emitted with a (significant) temporal delay. The radiation received at one instant of time would be correlated with that arriving some time later, so all the information would be recovered in the resulting radiation.
How does back-reaction work in this scenario? During the late-time asymptotic collapse, two processes unfold at the same time: (1) the energy associated with vacuum polarization becomes more and more negative; (2) radiation is emitted towards infinity. During any given time interval, observers at infinity record the arrival of some radiated energy; correspondingly, vacuum polarization contributes an increasingly negative energy (because the star becomes more compact), so the Bondi mass of the object decreases. By energy conservation, one expects these two quantities to balance, so that the emission of radiation is compensated by the increase of vacuum polarization near the central object. This balance makes the Bondi mass of the object decrease as if it were carried away by radiation, eventually reducing to zero at infinitely late times. Note that the expression (10) for the RSET can be rewritten in such a way as to exhibit the fact that vacuum polarization corresponds to the absence of black-body radiation at the local temperature scd . Although this does not constitute a proof, it is a strong plausibility argument in favour of the energy balance between radiation and vacuum polarization. It also strongly suggests that the asymptotic approach to the would-be horizon must be of the exponential type, rather than a power law: since a power law would not lead to a Planckian emission, it would be hard to reconcile with the result presented in reference scd .
Thus, provided that trapping horizons do not form, we have described a plausible scenario for the progressive collapse and evaporation of quasi-black holes. However, the end point of this process still seems to share a problem with the standard scenario: the apparent accumulation of baryon number within the collapsing object boulware2 . The least massive baryon is the proton. Baryon number is conserved in all experiments realized up to now and, in particular, the proton has been found to be stable (although Grand Unified Theories predict that it should eventually decay into leptons). In the standard paradigm for the evaporation of a black hole, the trapping horizon and its surroundings constitute an empty region of spacetime; therefore, there is only one physical quantity characterizing the quantum emission: the value of its Hawking temperature. For a standard evaporating black hole to be able to nucleate a proton–antiproton pair, it seems necessary that it reach a temperature larger than the proton rest energy divided by Boltzmann’s constant, or equivalently that its mass drop to a tiny value set by the proton mass (see the estimate below). However, a black hole having initially one solar mass would contain a baryon number of around 10^57. During the evaporation it would conserve this baryon number until it reached such a tiny Bondi mass. But then, even emitting all of its remaining energy in the form of baryons (with emission in the form of protons being the most efficient way of removing baryon number), it would end up either (1) leaving an almost massless relic with a baryon number of order 10^57 (a rather peculiar state), or (2) completely evaporating, thereby producing an enormous violation of baryon-number conservation.
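A quick back-of-the-envelope check of the numbers quoted above (ours, not from the paper; it uses only the standard Hawking temperature formula):

import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant [J s]
c    = 2.99792458e8      # speed of light [m/s]
G    = 6.67430e-11       # Newton's constant [m^3 kg^-1 s^-2]
kB   = 1.380649e-23      # Boltzmann constant [J/K]
m_p  = 1.67262192e-27    # proton mass [kg]
Msun = 1.989e30          # solar mass [kg]

# Temperature needed to nucleate proton-antiproton pairs: k_B T ~ m_p c^2
T_pair = m_p * c**2 / kB   # ~1.1e13 K

# Hawking temperature T_H = hbar c^3 / (8 pi G k_B M), inverted for the mass
M_crit = hbar * c**3 / (8 * math.pi * G * kB * T_pair)   # ~1e10 kg

# Baryon number of a one-solar-mass star
N_baryon = Msun / m_p      # ~1.2e57

print(f"T for p-pbar nucleation : {T_pair:.2e} K")
print(f"Hawking mass at that T  : {M_crit:.2e} kg")
print(f"Solar baryon number     : {N_baryon:.2e}")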
The quasi-black hole scenario, however, adds one extra ingredient to the previous discussion: the would-be horizon and its surroundings are now not an empty region of spacetime. In the vicinity of the would-be horizon there is always matter being progressively compressed. This fact could significantly affect the way the quasi-black hole radiates its energy. For example, an upper bound for the average density of a solar-mass quasi-black hole is given by that of the corresponding black hole (a few times bigger than that of a typical neutron star). At these densities and higher, it is quite plausible that new particle-physics effects could come into play and deplete the baryon number much more efficiently than the evaporation process. (Of course, for very massive quasi-black holes such effects will be negligible for a very long time, but they will eventually become important as the Bondi mass is decreased by the combined effect of Hawking radiation and vacuum polarization.)
Up to this point we have only considered spherically symmetric configurations. However, current observations tell us that most of the observed black hole candidates have a high rate of rotation, sometimes very close to extremality rotation . Hence, for the quasi-black hole scenario to be a feasible description of these objects, it would be necessary to generalize our proposal to rotating configurations. Given the complexity of the vacuum structure around rotating black holes ottewill , it is very difficult to make a precise proposal in this sense. However, we know that any rotating object possessing an ergoregion but not a horizon would be highly unstable cardoso . Hence we expect that any viable model of a rotating quasi-black hole should be characterized by a matter distribution extending up to the outer boundary of the ergoregion.
The fact that most of the progenitors of the observed black hole candidates are characterized by supercritical rotation (angular momentum exceeding the square of the mass, in geometric units) is often used as evidence for the validity of the cosmic censorship conjecture. It is interesting to note that if such a conjecture holds in standard general relativity, it would also be effective in preventing super-critical quasi-black holes. In order to understand this point it is enough to realize that a generalization of the calculation of this article to more general metrics allowing for extremality (e.g., Reissner–Nordström, Kerr, …) would still imply a pile-up of the RSET in the proximity of the would-be horizon if, and only if, such a horizon can form in the first place. That is, a large quantum-induced RSET can arise only if the collapsing object has already shed the extra charges (e.g., electric charge or angular momentum) so as to be subcritical in the proximity of horizon crossing. Supercritical configurations are therefore likely to be unaffected by the vacuum polarization and to behave as in classical general relativity. On the contrary, sub-critical configurations will develop (or not develop) trapping horizons according to the details of the dynamics.
Quantum physics, imposed upon the description of the collapse of astrophysical objects in situations that would classically lead to black hole formation, could unexpectedly lead to observable effects at early times, when the trapping horizon is about to form. In particular, we have shown that trans–Planckian modes are excited before a trapping horizon ever forms. Hence, whether the trapping horizon forms or not depends critically on assumptions concerning the net effect of any trans–Planckian physics that might be at work.
Assuming that quantum field theory holds unmodified up to arbitrarily high energies (as is commonly done in most of the extant literature), we have shown that there can be large deviations from classical collapse scenarios, provided the latter allow in the first place for a piling up of vacuum energy. Most of the classical collapse scenarios considered so far do not allow for such a piling up, owing to their intrinsic rapidity. In this sense the prediction of horizon formation in many of these models pp ; massar seems completely correct.
We have argued, however, that alternative classical collapse scenarios, in which horizon formation is approached slowly, are not only foreseeable but possibly natural in more realistic situations. If this is indeed the case, one would then have to add a new class of compact, horizonless objects (possibly the most compact objects apart from black holes themselves) to the astrophysical bestiary: the black stars.
In the final part of this work we have considered a particular subclass of these objects, the quasi-black holes, which could closely mimic all the most relevant features of black hole physics while at the same time avoiding most of its intrinsic problems (such as singularities, the information paradox, and the question of the end point of Hawking evaporation).
Summarizing, the quasi-black hole scenario for collapse and evaporation is the following one (see Fig. 2):
As a star of mass M implodes, we conjecture that its matter will try to adjust into new, possibly unstable, configurations so as to reach a new equilibrium against gravity. If there is ever a significant slowing down of the collapse, for any reason whatsoever, then this allows the vacuum polarization to progressively grow, and to further slow down the approach to trapping-horizon formation. Provided such an approach is asymptotic, with an exponential law controlled by some timescale, the quantum radiation produced during this process is still Planckian, with a temperature set by an effective surface gravity that is inversely proportional to the total Bondi mass of the star quasi-particle-prl . For a long time this effective surface gravity is indistinguishable from the standard one, so (from the point of view of an external observer) the object is essentially indistinguishable from a standard evaporating black hole. The emission of radiation is accompanied by an increase in vacuum polarization that progressively diminishes the Bondi mass of the star, so the would-be horizon shrinks and is never crossed by the matter configuration. When the Bondi mass has become sufficiently small, the standard Hawking contribution becomes negligible and the temperature is essentially controlled by the (growing) approach timescale. This contribution too decreases, because back-reaction is in fact slowing down the collapse; so the temperature, after reaching a maximum value, decreases and approaches zero.
We do not yet have a definitive proposal as to the end point of the evaporation process. This could only be achieved by understanding the physics of baryon nucleation in the presence of high-density states of matter. The end state of the evaporation could correspond to a zero-temperature relic with vanishing Bondi mass (hence, at large distances, gravitationally inert), with an inner structure formed by a core of positive mass and non-vanishing baryon number immersed in a cloud of polarized vacuum with compensating negative energy. (Note that the nature of such a relic would be quite different from that of a standard black hole remnant, because the relic could be regarded just as a peculiar case of a very compact star. For this reason, the usual issues related to remnants, like the compatibility with CPT invariance or their capacity for storing information, are not present in this scenario.) Alternatively, the end state might correspond to plain vacuum.
We are grateful to Daniele Amati, Larry Ford, Ted Jacobson, John Miller, José Navarro–Salas, Tom Roman, and Bernard Whiting for critically reading a preliminary version of this paper and for stimulating discussions. We would also like to thank Renaud Parentani and Robert Wald for their comments. CB has been funded by the Spanish MEC under project FIS2005-05736-C03-01, with a partial FEDER contribution. CB and SL are also supported by an INFN–MEC collaboration. MV was supported by a Marsden grant administered by the Royal Society of New Zealand, and wishes to thank both SISSA/ISAS (Trieste) and IAA (Granada) for hospitality.
- (1) S. W. Hawking, Nature 248, 30–31 (1974).
- (2) S. W. Hawking, Commun. Math. Phys. 43, 199–220 (1975); Erratum: ibid. 46, 206 (1976).
- (3) S. W. Hawking, Phys. Rev. D 14, 2460–2473 (1976).
- (4) J. Preskill, arXiv:hep-th/9209058.
- (5) O. Lunin and S. D. Mathur, Nucl. Phys. B 623, 342–394 (2002) [arXiv:hep-th/0109154].
- (6) D. Amati, in String Theory and Fundamental Interactions, edited by M. Gasperini and J. Maharana, Lecture Notes in Physics 737 (Berlin, Springer, 2008) [arXiv:hep-th/0612061].
- (7) T. A. Roman and P. G. Bergmann, Phys. Rev. D 28, 1265–1277 (1983).
- (8) S. A. Hayward, Phys. Rev. Lett. 96, 031103 (2006) [arXiv:gr-qc/0506126].
- (9) A. Ashtekar and M. Bojowald, Class. Quantum Grav. 22, 3349–3362 (2005) [arXiv:gr-qc/0504029]; Class. Quantum Grav. 23, 391–411 (2006) [arXiv:gr-qc/0509075].
- (10) F. J. Tipler, Phys. Rev. Lett. 45, 949–951 (1980).
- (11) P. Hajicek and W. Israel, Phys. Lett. A 80, 9–10 (1980). J. M. Bardeen, Phys. Rev. Lett. 46, 382–385 (1981).
- (12) J. W. York, Jr., Phys. Rev. D 28, 2929–2945 (1983).
- (13) S. W. Hawking, in General Relativity and Gravitation: Proceedings of the 17th International Conference, edited by P. Florides, B. Nolan and A. Ottewill (Singapore, World Scientific, 2005), pp. 56–62; Phys. Rev. D 72, 084013 (2005) [arXiv:hep-th/0507171].
- (14) S. D. Mathur, in Quantum Theory and Symmetries, edited by P. C. Argyres, T. J. Hodges, F. Mansouri, J. J. Scanio, P. Suranyi and L. C. R. Wijewardhana (Singapore, World Scientific, 2004), pp. 152–158 [arXiv:hep-th/0401115]; Fortsch. Phys. 53, 793–827 (2005) [arXiv:hep-th/0502050]; Class. Quantum Grav. 23, R115–R168 (2006) [arXiv:hep-th/0510180].
- (15) P. Hajicek, Phys. Rev. D 36, 1065–1079 (1987).
- (16) M. Visser, Int. J. Mod. Phys. D 12, 649–661 (2003) [arXiv:hep-th/0106111].
- (17) C. Barceló, S. Liberati, S. Sonego and M. Visser, Class. Quantum Grav. 23, 5341–5366 (2006) [arXiv:gr-qc/0604058]; Phys. Rev. Lett. 97, 171301 (2006) [arXiv:gr-qc/0607008].
- (18) P. G. Grove, Class. Quantum Grav. 7, 1353–1363 (1990).
- (19) P. O. Mazur and E. Mottola, arXiv:gr-qc/0109035; in Quantum Field Theory Under the Influence of External Conditions, edited by K. A. Milton (Princeton, Rinton Press, 2004), pp. 350–357 [arXiv:gr-qc/0405111]; Proc. Nat. Acad. Sci. 111, 9545–9550 (2004) [arXiv:gr-qc/0407075]. M. Visser and D. L. Wiltshire, Class. Quantum Grav. 21, 1135–1152 (2004) [arXiv:gr-qc/0310107]. C. Cattoen, T. Faber and M. Visser, ibid. 22, 4189–4202 (2005) [arXiv:gr-qc/0505137].
- (20) D. G. Boulware, Phys. Rev. D 11, 1404–1423 (1975).
- (21) N. D. Birrell and P. C. W. Davies, Quantum Fields in Curved Space (Cambridge, Cambridge University Press, 1982).
- (22) S. A. Fulling, M. Sweeny and R. M. Wald, Commun. Math. Phys. 63, 257–264 (1978).
- (23) M. Visser, in The Future of Theoretical Physics and Cosmology, edited by G. W Gibbons, E. P. S. Shellard and S. J. Rankin (Cambridge, Cambridge University Press, 2003), pp. 161–176 [arXiv:gr-qc/0204022].
- (24) S. A. Fulling, F. J. Narcowich and R. M. Wald, Ann. Phys. (N.Y.) 136, 243–272 (1981).
- (25) R. Parentani and T. Piran, Phys. Rev. Lett. 73, 2805–2808 (1994) [arXiv:hep-th/9405007].
- (26) C. G. Callan, Jr., S. B. Giddings, J. A. Harvey and A. Strominger, Phys. Rev. D 45, R1005–R1009 (1992) [arXiv:hep-th/9111056]. T. Banks, A. Dabholkar, M. R. Douglas and M. O’Loughlin, ibid. 45, 3607–3616 (1992) [arXiv:hep-th/9201061]. J. G. Russo, L. Susskind and L. Thorlacius, Phys. Lett. B 292, 13–18 (1992) [arXiv:hep-th/9201074]. T. Piran and A. Strominger, Phys. Rev. D 48, 4729–4734 (1993) [arXiv:hep-th/9304148].
- (27) M. Visser, Phys. Rev. D 54, 5103–5115 (1996) [arXiv:gr-qc/9604007]; ibid. 54, 5116–5122 (1996) [arXiv:gr-qc/9604008]; ibid. 54, 5123–5128 (1996) [arXiv:gr-qc/9604009]; ibid. 56, 936–952 (1997) [arXiv:gr-qc/9703001]; arXiv:gr-qc/9710034. C. Barceló and M. Visser, Int. J. Mod. Phys. D 11, 1553–1560 (2002) [arXiv:gr-qc/0205066].
- (28) M. Visser and C. Barceló, arXiv:gr-qc/0001099.
- (29) W. G. Unruh, Phys. Rev. D 14, 870–892 (1976).
- (30) P. Candelas, Phys. Rev. D 21, 2185–2202 (1980). D. W. Sciama, P. Candelas and D. Deutsch, Adv. Phys. 30, 327–366 (1981).
- (31) I. Racz and R. M. Wald, Class. Quantum Grav. 9, 2643–2656 (1992).
- (32) P. C. W. Davies, Rep. Prog. Phys. 41, 1313–1355 (1978).
- (33) T. Jacobson, Prog. Theor. Phys. Suppl. 136, 1–17 (1999) [arXiv:hep-th/0001085].
- (34) W. G. Unruh and R. Schutzhold, Phys. Rev. D 71, 024028 (2005) [arXiv:gr-qc/0408009].
- (35) C. Barceló, A. Cano, L. J. Garay and G. Jannes, Phys. Rev. D 74, 024008 (2006) [arXiv:gr-qc/0603089].
- (36) T. Jacobson, S. Liberati and D. Mattingly, Ann. Phys. (N.Y.) 321, 150–196 (2006) [arXiv:astro-ph/0505267].
- (37) P. C. W. Davies, S. A. Fulling and W. G. Unruh, Phys. Rev. D 13, 2720–2723 (1976). S. M. Christensen and S. A. Fulling, ibid. 15, 2088–2104 (1977). P. C. W. Davies and S. A. Fulling, Proc. R. Soc. Lond. A 354, 59–77 (1977).
- (38) P. Painlevé, C. R. Acad. Sci. (Paris) 173, 677–680 (1921). A. Gullstrand, Ark. Mat. Astron. Fys. 16, 1–15 (1922).
- (39) C. Barceló, S. Liberati, S. Sonego and M. Visser, New J. Phys. 6, 186 (2004) [arXiv:gr-qc/0408022]. C. Barceló, S. Liberati and M. Visser, Living Rev. Relativity 8, 12 (2005) [arXiv:gr-qc/0505065]; URL (cited on 28 November 2007): http://www.livingreviews.org/lrr-2005-12
- (40) Reverend John Michell, FRS, “On the Means of discovering the Distance, Magnitude, etc. of the Fixed Stars, in consequence of the Diminution of the Velocity of their Light, in case such a Diminution should be found to take place in any of them, and such other Data should be procured from Observations, as would be farther necessary for that Purpose”, Philosophical Transactions of the Royal Society of London 74, 35–57 (1784); reprinted in Black Holes, edited by S. Detweiler (Stony Brook, AAPT, 1982), pp. 8–18. (Warning: The correct spelling is Michell, not Mitchell, though usage is somewhat inconsistent.) The original reference is not easy to obtain and we provide a brief quotation:
[…] if the semi-diameter of a sphære of the same density with the sun were to exceed that of the sun in the proportion of 500 to 1, a body falling from an infinite height towards it, would have acquired at its surface a greater velocity than that of light, and consequently, supposing light to be attracted by the same force in proportion to its vis inertiæ [inertial mass], with other bodies, all light emitted from such a body would be made to return towards it, by its own proper gravity.
- (41) Pierre Simon Marquis de Laplace, Exposition du Système du Monde (Paris, Imprimerie du Cercle Social, 1796). The intimate connection between the work of Michell and Laplace can clearly be seen from the following quotation:
A luminous star, of the same density as the earth, and whose diameter should be two hundred and fifty times larger than that of the Sun, would not, in consequence of its attraction, allow any of its [light] rays to arrive at us; it is therefore possible that the largest luminous bodies in the universe may, through this cause, be invisible.
- (42) W. Israel, in 300 Years of Gravitation, edited by S. W. Hawking and W. Israel (Cambridge, Cambridge University Press, 1987), pp. 199–276. D. Lynden-Bell, in The Central Kiloparsec of Starbursts and AGN: The La Palma Connection, edited by J. H. Knapen, J. E. Beckman, I. Shlosman and T. J. Mahoney, ASP Conference Proceedings 249 (San Francisco, Astronomical Society of the Pacific, 2001), pp. 212–229 [arXiv:astro-ph/0203480].
- (43) C. R. Stephens, G. ’t Hooft and B. F. Whiting, Class. Quantum Grav. 11, 621–648 (1994) [arXiv:gr-qc/9310006].
- (44) D. G. Boulware, Phys. Rev. D 13, 2169–2187 (1976).
- (45) U. H. Gerlach, Phys. Rev. D 14, 1479–1508 (1976).
- (46) S. Sonego, J. Almergren and M. A. Abramowicz, Phys. Rev. D 62, 064010 (2000) [arXiv:gr-qc/0005106]. T. Vachaspati, D. Stojkovic and L. M. Krauss, ibid. 76, 024005 (2007) [arXiv:gr-qc/0609024].
- (47) F. Belgiorno, S. Liberati, M. Visser and D. W. Sciama, Phys. Lett. A 271, 308–313 (2000) [arXiv:quant-ph/9904018]. T. Jacobson, in Lectures on Quantum Gravity, edited by A. Gomberoff and D. Marolf (Springer, 2005), pp. 39–89 [arXiv:gr-qc/0308048].
- (48) S. N. Zhang, W. Cui and W. Chen, Ap. J. 482, L155–L158 (1997) [arXiv:astro-ph/9704072]. R. Shafee, J. E. McClintock, R. Narayan, S. W. Davis, L. X. Li and R. A. Remillard, Ap. J. 636, L113–L116 (2006) [arXiv:astro-ph/0508302]. R. Narayan, J. E. McClintock and R. Shafee, arXiv:0710.4073 [astro-ph].
- (49) A. C. Ottewill and E. Winstanley, Phys. Rev. D 62, 084018 (2000) [arXiv:gr-qc/0004022].
- (50) V. Cardoso, P. Pani, M. Cadoni and M. Cavaglià, arXiv:0709.0532 [gr-qc].
- (51) S. Massar, Phys. Rev. D 52, 5857–5864 (1995) [arXiv:gr-qc/9411039]. |
Many of us are familiar with accuracy specifications pertaining to force measurement, usually a percentage of full scale or a percentage of reading. While this is broadly understood, the waters become a bit murkier in applications involving both force and distance measurements.
Many critical test results are values derived from distance, such as spring rate, modulus of elasticity, and elongation at failure. In cases like these, confident and reproducible test methods rely on distance accuracy, making it critical to understand a given force tester’s true distance measurement accuracy.
Spring Rate: The Basic Interplay Between Force and Distance
Spring rate is a popular test result that demonstrates the mutual importance of force and distance accuracies. A spring rate test can evaluate any ordinary tension or compression spring or, more generally, the stiffness of any mechanical part.
This test result characterizes how much force must be applied to a sample to create a given amount of deflection.
To visualize this, imagine wanting to know if the spring rate of a given compression spring is within its engineering tolerance. To make this determination, one would typically compress the spring down to a predetermined height (Height1) and measure the force (Force@H1) at that height. The spring would then be further compressed to a second height (Height2) and a second force measurement (Force@H2) would be captured. Inputting these values into the spring rate equation:

Spring Rate = (Force@H2 − Force@H1) / (Height1 − Height2)
Although solvable, a downside of using this spring rate calculation to evaluate acceptance criteria is its inherent complexity. This calculation requires managing four different variables across two different dimensions.
Acceptance Criteria Within a Single Dimension
To simplify the calculations performed at the end of each test run, there is a workaround: use an offline calculation to determine what force value should be measured at each height. Then, by comparing the target force values (calculated before the test) to the measured force values (captured during the test), the test results and acceptance criteria are confined within just the force dimension.
To illustrate this simplified approach, let us assign some values to a hypothetical sample:
| Parameter | Value |
| --- | --- |
| Spring Rate | 50 lbF/in, ±5% |
| Free Length | 4 in |
| Height 1 | 3 in |
| Height 2 | 2 in |
To solve for the Target Force at Height 1, these values are put into the formula:

Target Force@H = Spring Rate × (Free Length − Height), so Target Force@H1 = 50 lbF/in × (4 in − 3 in) = 50 lbF.
The target force values are 50 lbF and 100 lbF. Since the spring rate has a tolerance of ±5%, the acceptance ranges become 47.5 to 52.5 lbF and 95.0 to 105.0 lbF.
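The same bookkeeping is easy to script. Here is a minimal sketch (variable names and structure are illustrative, not taken from any particular tester's software):

```python
# Precompute target forces and acceptance bands for a spring rate test.
spring_rate = 50.0          # lbF/in, nominal
tolerance = 0.05            # +/-5 % engineering tolerance on the rate
free_length = 4.0           # in
test_heights = [3.0, 2.0]   # in

for h in test_heights:
    deflection = free_length - h
    target = spring_rate * deflection   # expected force at this height
    low, high = target * (1 - tolerance), target * (1 + tolerance)
    print(f"{h} in: target {target:.1f} lbF, accept {low:.1f}-{high:.1f} lbF")

# Output: 3.0 in -> 50.0 lbF (47.5-52.5); 2.0 in -> 100.0 lbF (95.0-105.0)
```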
Blind Spot: Indirect Force Measurement Errors
While it is convenient to isolate all acceptance criteria within the force dimension, one pitfall of this approach is that users may gloss over the measurement errors that occur within the distance dimension. This can be problematic because our testing application is designed to examine the relationship between force and distance. And, in this test, force is the dependent axis—meaning it depends on the independent axis, distance. So, any errors that occur within the distance dimension will necessarily impact results captured within the force dimension.
For example, suppose a force tester was configured to compress the spring down to a height of 3 inches. But, because of distance inaccuracy, the machine instead moved a bit further down, to a height of 2.9 inches. Even though the machine display would show a measurement of 3 inches, the machine would remain unaware that the spring is actually being over-compressed by an extra 0.1 inch. With a spring rate of 50 lbF/in, the force channel would be measuring 5 lbF higher than if it were at a true 3-inch height.
Therefore, to determine if a given force tester is sufficiently accurate, we must account for inaccuracies in both its force and distance channels. Errors that originate in the force channel are usually quite easy to determine via calibration and verification against a known standard. Errors that originate in the distance channel take a little extra work to estimate.
Examining Test Machine Accuracies
Let us continue to flesh out this scenario by estimating the combined force measurement error for this application. And let us use some real-world specifications from an actual testing system, specifically a single-column force tester commonly used in lower force applications.
The clearest expression of distance measurement accuracy states the following:
- A maximum fixed error, e.g., ±0.002 in
- Any error related to the position along the testing machine’s column
- Any error related to the amount of force produced
The last two accuracy components above are important because a single-column testing machine’s column deflects under load, due to the cantilever effect of the crosshead. The amount of deflection can vary greatly as the crosshead travels from the bottom of the column to the top.
The load cell (force sensor) also deflects under load. This amount of deflection is linear and can be easily predicted.
These components may be compensated for via the machine's software; however, not all force testers have this ability, nor are those with this ability necessarily factory-compensated.
For this exercise, let us assume that an in-tolerance sample could require forces up to 105 lbF, so it would make sense to configure our machine with a 200 lbF capacity load cell. Assuming an accuracy of ±0.1% of full scale, the testing system will carry an error of ±0.2 lbF throughout its measurement range while affording us ample measurement headroom.
As we move on to our control axis, we note a distance accuracy specification of ±0.002 in, valid over the entire length of stroke and over the entire range of forces. A specification this simple and complete is uncommon, but it is easy to work with and makes our calculations straightforward.
In a previous example, we used the nominal spring rate to estimate how much force error would result when distance inaccuracy caused us to unknowingly over-compress a spring sample. We can use that same concept to characterize how much force error will be contributed by the distance accuracy specification:

Indirect force error = Spring Rate × distance error = 50 lbF/in × (±0.002 in) = ±0.1 lbF
In this application, the error contribution from distance accuracy (±0.1 lbF) is estimated to be just half of the error that comes directly from the force channel (±0.2 lbF). Playing it conservatively, we can add these two error values together to arrive at a combined error estimate of ±0.3 lbF.
A Universal Quandary: How Much Accuracy Is Enough?
Of the two measurement points in this application, the 3 in height requires the tightest accuracy, so we should use that as our worst-case scenario. At this height, the force target of 50 lbF has a tolerance of ±2.5 lbF. Comparing the ±2.5 lbF engineering tolerance to our combined force error estimate of ±0.3 lbF, we can calculate the test accuracy ratio (TAR):

TAR = engineering tolerance / combined measurement error = 2.5 lbF / 0.3 lbF ≈ 8.3:1
TAR is a simple way of relating the measurement accuracy of a testing system to the acceptance criteria for the testing application. The basic rule of thumb on TAR has been to have at least a 4:1 ratio, so the more than 8:1 ratio here should deliver a strong level of confidence to our conformance decisions.
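For reference, the error-budget arithmetic above can be written out as a short sketch (using the spec values assumed in this example):

```python
# Combine direct and indirect force errors, then form the test accuracy ratio.
load_cell_capacity = 200.0                  # lbF
force_error = 0.001 * load_cell_capacity    # +/-0.1 % of full scale -> 0.2 lbF
distance_error = 0.002                      # in, flat distance spec
spring_rate = 50.0                          # lbF/in, nominal

indirect_error = spring_rate * distance_error   # distance error mapped into force: 0.1 lbF
combined_error = force_error + indirect_error   # conservative straight sum: 0.3 lbF

force_tolerance = 2.5                           # lbF, +/-5 % of the 50 lbF target
tar = force_tolerance / combined_error
print(f"combined error +/-{combined_error:.1f} lbF, TAR {tar:.1f}:1")   # 8.3:1
```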
This accuracy analysis points to a central question: How much accuracy is enough? Let us work through some examples.
Capturing a force measurement at the 3-inch height point, let us first consider an ideal scenario: our measured value lands right at 50.0 lbF. This is great because our sample appears to be precisely at the target force value, in the exact middle of our acceptance band.
But, since there will likely be some measurement error, there is a good chance that the true value is not exactly 50 lbF. Our combined force error was ±0.3 lbF, so we can be confident that the true value is somewhere between 49.7 and 50.3 lbF.
Expressing this TAR scenario graphically:
Above, the vertical green line represents the target force value of 50 lbF. The blue bell curve is the measurement error distribution of ±0.3 lbF, showing where the true value is most likely to fall. The orange rectangle is the acceptance criteria of ±2.5 lbF.
Since even the outskirts of the measurement error distribution (blue curve) come nowhere close to the tolerance limits of the spring (orange rectangle), the odds of this sample actually being out of tolerance are minimal. As mentioned, having the measured value land right on the target value is the ideal situation.
But what happens if we instead had a measured value of 52.0 lbF? Since the upper limit of the acceptance band is 52.5 lbF, measuring 52.0 lbF still puts the sample within tolerance, right?
Since the force tester under discussion affords us relatively tight accuracies, the error distribution of the measurement is tucked safely within the acceptance tolerance band. It would be safe to conclude that this sample is still within tolerance.
But we must acknowledge that there is less certainty in this conformance decision than when the measurement result landed right on the target value. The second TAR scenario demonstrates that tighter accuracy specifications become especially valuable whenever an in-tolerance sample veers closer to its tolerance limit.
Finally, let us reconsider the second measurement scenario, but this time using a less accurate testing system. Let us assume that it meets the minimum 4:1 TAR guidance with a combined measurement error of ±0.625 lbF.
Inserting these lower-performance specifications into the previous measurement scenario:
The concern here is the section of the error distribution (in red) that extends beyond the upper tolerance limit. The section in red shows us the added risk we will carry if we use this measurement as the basis for an in-tolerance conformance decision.
When in doubt, specify a force tester with the tightest and most complete distance accuracy specification necessary for the application. As these TAR scenarios demonstrate, using a higher specification system allows users to accept in-tolerance samples more confidently without the baggage of added risk.
Error propagation and Sahelanthropus tchadensis
A recent article by Lebatard and many co-authors in the Proceedings of the National Academy of Sciences shows why correct error propagation is important. In this article, incorrect error propagation leads to a wrong conclusion. Correcting it largely invalidates the point of the paper. The article is open access and is here.
What these authors are trying to do is date the occurrence of the early human ancestor Sahelanthropus tchadensis in a lake sediment section in Chad. They use the Be-10/Be-9 ratio of the lake sediments to accomplish this, on the basis that the 10/9 ratio of leachable Be in the lake and surface sediments is constant through time. Thus, the present 10/9 ratio of lake sediments preserved in a stratigraphic section is related to their age by:
R_meas = R_dep · e^(−λt)

where R_dep is the 10/9 ratio at the time of sediment deposition (assumed to be constant through time), R_meas is the measured 10/9 ratio in the sediments of unknown age, λ is the decay constant for Be-10 (4.99e-7 /yr), and t is the age of the sediment.
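Solving for t gives t = ln(R_dep/R_meas)/λ. A minimal numerical sketch (function and variable names are mine, not from the paper):

```python
import math

LAMBDA_BE10 = 4.99e-7   # /yr, Be-10 decay constant quoted above

def be10_be9_age(r_measured, r_depositional):
    """Apparent age (yr) from measured and assumed depositional 10/9 ratios."""
    return math.log(r_depositional / r_measured) / LAMBDA_BE10

# e.g. a measured ratio 1/e of the depositional value gives 1/LAMBDA ~ 2.0 Myr
```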
This is straightforward except that there is no way to be sure from first principles that the key assumption — constant depositional 10/9 ratio over time — is true. Be-10 is supplied from fallout of cosmic-ray produced Be-10 in the atmosphere, which ought to be more or less steady. Be-9, on the other hand, comes from dissolution of Be-bearing minerals somewhere in the lake basin, which might not be steady. Thus, the only way to know whether this key assumption is true is to look at the change in the 10/9 ratio over time: if we see i) a smooth, steady decrease in 10/9 ratio with stratigraphic age, and ii) the same 10/9 ratio in stratigraphically closely spaced samples that should share the same age, then we might reasonably conclude that the depositional 10/9 ratio is more or less constant. The authors of this paper follow this sort of reasoning, as follows. First, they observe that ages from sets of samples from the same stratigraphic unit — which should be the same within measurement error — show values of the MSWD statistic that are near 1. MSWD near 1 indicates that the scatter in data is commensurate with the uncertainties in the data, i.e. no excess scatter is present. Second, they observe that average Be-10/Be-9 ages from certain stratigraphic intervals are broadly in agreement with biostratigraphic age constraints. These two observations lead them to conclude that the 10/9 ratio in lake sediments is constant through time.
The problem with this line of reasoning is that they have incorrectly calculated MSWD values, because of incorrect error propagation. This is clear from two observations. First, the stated relative uncertainties in the ages are greater than the relative uncertainties in the ratio measurements. For example, a sample at 6.1 meters in their section TM254 (readers who care may want to look at their Table 2 at this point) has an 8% measurement uncertainty on the ratio and an apparent age of 6.5 Ma. The reported uncertainty on the age is 0.75 Ma, a relative uncertainty of nearly 12%. This can't be correct. Think about it: 7.2 Ma is roughly five half-lives of Be-10 (the half-life is about 1.4 Ma). A 50% uncertainty on the ratio measurement would mean an uncertainty of one half-life, or 1.4 Ma. If the age is five half-lives, this is only 20% of the total age. So a 50% uncertainty in the ratio measurement becomes only a 20% uncertainty in the age estimate. The relationship is not quite linear, but this means that an 8% error in the ratio at this age should become something like a 4% error in the age. This is a general property of age uncertainties in radioactive decay systems: as age increases, the relative uncertainty on the age becomes much smaller than the relative uncertainty on the amount of parent remaining. Thus, the fact that these authors report relative age uncertainties that are larger than relative ratio measurement uncertainties indicates that something is wrong.
The other observation that clearly indicates that something is wrong comes from calculating the MSWD on the measured ratios, instead of the ages, for the sets of samples from particular stratigraphic levels that the authors have averaged. For example, six samples from 7.3-8.5 m in the TM254 section (Table 2 again) have 10/9 ratios that vary by a factor of 5 and have measurement uncertainties of 9-16 %. These measurements clearly do not belong to a single population and have a MSWD of 49.8. However, when the authors transform these ratios to ages, somehow the ages from the same samples have a MSWD of 1.1. If the ratios don’t belong to a single population, then clearly the ages derived from those ratios can’t belong to a single population either. Something is seriously wrong here.
An additional notable observation is that some of the MSWD values reported by the authors (0.10-0.28) are wildly improbable. This suggests overestimation of uncertainties.
So what happens if we do the error propagation correctly? Here is how to do the error propagation. Uncertainties in the ages come from three sources: i) uncertainty in the estimate for the depositional 10/9 ratio (R_dep), ii) uncertainty in the Be-10 decay constant (λ), and iii) measurement uncertainty in the observed 10/9 ratio (σ_R). In computing uncertainties on ages for a MSWD calculation, we should only consider iii), the measurement error in the sample 10/9 ratio. This is because we are comparing different samples to each other, so must only consider errors that are independent between samples. The uncertainties in the decay constant and the initial ratio are common to all samples, so do not enter into a MSWD calculation. Using normal linear error propagation on t = ln(R_dep/R_meas)/λ, the uncertainty in the age that should be used in calculating the MSWD is:

σ_t = σ_R / (λ · R_meas)

where σ_R is the uncertainty in the measured 10/9 ratio. The following table shows stratigraphic heights, measured ratios, ages, uncertainties reported by the authors, and actual uncertainties for the six samples in section TM254 discussed above.
| Stratigraphic ht (m) | 10/9 ratio (× 10⁻¹⁰) | Apparent age (Ma) | Reported age uncertainty (Ma) | Correct age uncertainty (Ma) |
| --- | --- | --- | --- | --- |
| 8.5 | 16.36 ± 2.60 | 5.39 | 0.92 | 0.31 |
| 8.4 | 7.68 ± 0.76 | 6.87 | 0.80 | 0.19 |
| 8.4 | 3.07 ± 0.34 | 8.67 | 1.11 | 0.22 |
| 8.1 | 9.04 ± 1.13 | 6.55 | 0.91 | 0.25 |
| 7.9 | 6.89 ± 0.65 | 7.08 | 0.80 | 0.18 |
| 7.3 | 6.20 ± 0.83 | 7.39 | 1.08 | 0.26 |
For these six samples, again, the authors computed a MSWD for the ages of 1.14 using the incorrect age uncertainties. Based on this value, they concluded that the ages belonged to a single population and they could properly average them to obtain a summary age and standard error for this part of the stratigraphic section. This is incorrect. The actual MSWD of these ages is 18, clearly showing that the apparent ages do not belong to a single population, as we expect from the fact that the 10/9 ratios clearly do not belong to a single population.
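A short sketch reproduces that number from the corrected uncertainties in the table above (inverse-variance weighted mean, then MSWD):

```python
ages   = [5.39, 6.87, 8.67, 6.55, 7.08, 7.39]   # Ma, apparent ages
sigmas = [0.31, 0.19, 0.22, 0.25, 0.18, 0.26]   # Ma, correct (measurement-only) errors

weights = [1 / s**2 for s in sigmas]            # inverse-variance weights
mean = sum(w * a for w, a in zip(weights, ages)) / sum(weights)
mswd = sum(((a - mean) / s) ** 2 for a, s in zip(ages, sigmas)) / (len(ages) - 1)
print(f"weighted mean {mean:.2f} Ma, MSWD {mswd:.0f}")   # MSWD ~ 18
```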
A plot of ratios and ages from this section shows this situation clearly:
The plot on the left shows that 10/9 ratios, as expected from the general concept of the method, do generally decrease with stratigraphic depth. However, they are widely scattered around this general trend by an amount well in excess of measurement uncertainty. More about this later. The plot in the center shows apparent ages with the (incorrect) uncertainties reported by the authors. It is clear from this plot why getting the error propagation wrong leads to a misleading conclusion: the large errors here give the impression that the data are scattered around a smooth increase in age with depth, by an amount that is commensurate with the measurement error. The third plot shows the same apparent ages with the correct uncertainties. It is clear that although ages do generally increase with depth, ages from the same stratigraphic level, like ratios from the same stratigraphic level, disagree by amounts well in excess of measurement uncertainty. Note again that the fact that we have not included uncertainties in the initial ratio and the decay constant does not change these conclusions: errors in these parameters would shift the entire array of ages without changing the relationship between them.
To summarize, one of the key observations that the authors cite in support of their claim that the depositional 10/9 ratio is constant through time is that ages from closely spaced stratigraphic levels agree within uncertainty. Doing the error propagation correctly shows that, in fact, this is not the case. In fact, the spread of 10/9 ratios from closely spaced levels is well in excess of measurement uncertainty. This shows fairly clearly that, in fact, the 10/9 ratio was not constant over time. It most likely stayed within a certain range — as shown by the overall trend of decreasing ratio with stratigraphic depth — but varied by as much as a factor of 5 over short time intervals.
Why this variation? Remember Be-9 is delivered to the lake by dissolution of Be-bearing minerals in the watershed. It seems certain that the rate of Be-9 supply is affected by hydrologic changes, and the fact that the sediments in question show a fluctuating lake level indicates that there were hydrologic changes. Thus, it seems very likely that orbital-scale hydrologic changes affected Be-9 delivery to the lake, and thus the 10/9 ratio of leachable Be in the lake system, on relatively short time scales. In any case, the data in this paper pretty clearly show that the assumption of constant 10/9 ratio that is necessary to apply this dating method, is false at short time scales.
Is the entire dating exercise wrong then? Probably not. Clearly the assumption of a strictly constant depositional 10/9 ratio is wrong. However, ratios do clearly decrease with age, showing that changes in the depositional ratio most likely took place on a shorter time scale than is represented by the entire section, and thus that the ratio probably stayed within some bounds. So the 10/9 ratios do give us some age information. The important conclusion is that the true uncertainty in the actual age of the samples that the authors dated is much bigger than the measurement uncertainty. If the initial ratio is only known to a factor of 4, then the age can only be known with a precision of two half-lives of Be-10 (ln 4 / λ), that is, 2.8 Ma. So the ages reported in this paper are most likely within two million years of the true age of the sediments, and fossils, in question. The important conclusion is that the precision of the Be-10/Be-9 method is much poorer than proposed by the authors. The authors call attention to the fact that the ages agree with biostratigraphic age constraints (which also have a precision of 1-2 Ma) and suggest that the 10/9 ages are more accurate than the biostratigraphic ages. In fact this is not the case; the precision of the two methods is similar.
Could this be fixed? Yes. The key is to know the amount and the time scale of changes in the depositional 10/9 ratio. These could easily be obtained by high-resolution sampling at the presumably orbital time scales that have the largest effect on the lake basin hydrology. Once known, this information could be used to find out what amount of time averaging would be needed to ensure that the constant-initial-ratio assumption was true.
Is Sahelanthropus tchadensis actually 6.8-7.2 Ma? Perhaps. Nothing in this paper disproves that hypothesis. However, it does not prove the hypothesis either. Likewise, the hypothesis that Australopithecus bahrelghazali at this site is contemporaneous with the extremely well dated (by Ar-Ar) Lucy skeleton in Ethiopia is neither supported nor refuted by the Be-10/Be-9 results. |
An edition of Similarity solutions of systems of partial differential equations using MACSYMA was found in the catalog.
Published by the Courant Institute of Mathematical Sciences, New York University, New York.
| Field | Value |
| --- | --- |
| Statement | By P. Rosenau and J. L. Schwarzmeier |
| Contributions | Schwarzmeier, J. L. |
| Number of Pages | 22 |
This chapter discusses Pfaffian differential equations. Another name for a Pfaffian differential equation is a total differential equation. Pfaffian differential equations are equations of the form F(x) · dx = ∑ᵢ Fᵢ(x₁, x₂, …, xₙ) dxᵢ = 0. Systems of Differential Equations Handout (Peyam Tabrizian, Friday, November 18th): This handout is meant to give you a couple more examples of all the techniques discussed in Chapter 9, to counterbalance all the dry theory and complicated applications in the differential equations book! Enjoy! :) Note: Make sure to read this carefully!
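As a concrete illustration of the Pfaffian form above (a toy example of my own choosing, not one from the chapter): the equation y dx + x dy = 0 is exact, and a short SymPy check recovers its potential function:

```python
import sympy as sp

x, y = sp.symbols('x y')
F1, F2 = y, x                 # Pfaffian equation: y dx + x dy = 0

# Exactness check: dF1/dy == dF2/dx, so a potential function exists.
assert sp.diff(F1, y) == sp.diff(F2, x)

phi = sp.integrate(F1, x)     # phi = x*y, and d(phi) = y dx + x dy
print(phi)                    # -> x*y, so solutions are the level curves x*y = C
```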
Systems of Differential Equations: the corresponding homogeneous system has an equilibrium solution x₁(t) = x₂(t) = x₃(t) = 0. This constant solution is the limit at infinity of the solution to the homogeneous system, using the initial values x₁(0) … det(A − rI) = 0: the determinant det(A − rI) is formed by subtracting r from the diagonal of A. The polynomial p(r) = det(A − rI) is called the characteristic polynomial. If A is 2 × 2, then p(r) is a quadratic. If A is 3 × 3, then p(r) is a cubic. The determinant is expanded by the cofactor rule, in order to preserve factorizations.
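For instance, with a small matrix of my own choosing, SymPy reproduces the characteristic polynomial and its roots:

```python
import sympy as sp

r = sp.symbols('r')
A = sp.Matrix([[2, 1],
               [1, 2]])

p = (A - r * sp.eye(2)).det()   # characteristic polynomial det(A - r*I)
print(sp.factor(p))             # -> (r - 1)*(r - 3)
print(sp.solve(p, r))           # eigenvalues: [1, 3]
```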
In this section we will give a review of the traditional starting point for a linear algebra class. We will use linear algebra techniques to solve a system of equations as well as give a couple of useful facts about the number of solutions that a system of equations can have. The Wolfram Language has powerful functionality based on the finite element method and the numerical method of lines for solving a wide variety of partial differential equations. The symbolic capabilities of the Wolfram Language make it possible to efficiently compute solutions from PDE models expressed as equations.
Full text of "Similarity solutions of systems of partial differential equations using MACSYMA" See other formats COO MF Courant Institute of Mathematical Sciences Magneto-Fluid Dynamics Division Similarity Solutions of Systems of. Excerpt from Similarity Solutions of Systems of Partial Differential Equations Using Macsyma Here we introduce the use of the algebraic computing system macsyma to facilitate these calculations.
Specifically, macsyma is used to calculate systematically the generators of the infinitesimal group under which the considered equations are : P. Rosenau. Similarity solutions of systems of partial differential equations using MACSYMA Item Preview remove-circle Share or Embed This Item.
Similarity solutions of systems of partial differential equations using MACSYMA by Rosenau, P; Schwarzmeier, J. Publication date PublisherPages: Symmetry and similarity solutions 1 Symmetries of partial differential equations New solutions from old Consider a partial differential equation for u(x;t)whose domain happens to be (x;t) 2R2.
It often happens that a transformation of variables gives a new solution to the equation. For example, if u(x;t) is a solution to the diffusion File Size: KB. Similarity solutions of partial differential equations using DESOLV exact solutions of systems of partial differential equations arising in fluid dynamics, continuum mechanics and general relativity are of considerable value for the light they shed into extreme cases which are not susceptible to numerical treatments.
Baumann, T.F Cited by: Two-dimensional diffusion processes are considered between concentric circles and in angular sectors. The aim of the paper is to compute the probability that the process will hit a given part of the boundary of the stopping region first.
The appropriate partial differential equations are solved explicitly by using the method of similarity solutions and the method of separation of : Mario Lefebvre. Inspire a love of reading with Prime Book Box for Kids Similarity Solutions of Systems of Partial Differential Equations Using MACSYMA J L Schwarzmeier.
Paperback. $ Handbook New Dowsing Ring Smart Home Security Systems eero WiFi Stream 4K Video in Every Room:Author: B R Books LLC. Chapter 3 Similarity Methods for PDEs In this chapter we present a brief summary of the similarity techniques that are one of the few general techniques for obtaining exact solutions of partial di erential equations.
Some of them are explained with the help of File Size: KB. Such solutions found by Lie's method, are called invariant solutions.
Essential to this approach is the need to solve overdetermined systems of "determining equations", which consist of coupled, linear, homogeneous, partial differential equations.
Typically, such systems vary between ten to several hundred equations. Using this isovector, the ordinary differential equations leading to the similarity solutions are found. The numerical solution of the equations are presented and.
independent variables. In this paper we present solutions that use similarity techniques to reduce the nonlinear partial differential equations to nonlinear ordinary differential equations, which may then be solved.
The technique can be viewed as an extension of similar techniques previously developed for the Einstein equations with two KillingAuthor: Elliot Fischer. Systems of Partial Differential Equations of General Form The EqWorld website presents extensive information on solutions to various classes of ordinary differential equations, partial differential equations, integral equations, functional equations.
Let us consider a partial di erential equation in the form f @c @t; @c @x; = 0: We study the existence and properties of similarity solutions.
Not all solutions to PDEs are similarity solutions, PDEs do not always have similar solutions, but when they exist, they shed light on the behaviour of more general Size: KB. The appropriate partial differential equations are solved explicitly by using the method of similarity solutions and the method of separation of variables.
Some solutions are expressed as generalized Fourier series. Introduction Let X 1 t,X 2 t be the two-dimensional diffusion process defined by the stochastic differ-ential equations dX. From the documentation: "DSolve can find general solutions for linear and weakly nonlinear partial differential equations.
Truly nonlinear partial differential equations usually admit no general solutions." While yours looks solvable, it probably just decides it can't do it. $\endgroup$ – Szabolcs Feb 14 '14 at Real systems are often characterized by multiple functions simultaneously.
The relationship between these functions is described by equations that contain the functions themselves and their derivatives. In this case, we speak of systems of differential equations. In this section we consider the different types of systems of ordinary differential equations, methods of their solving, and.
The purpose of these lectures is to show how the method of symmetry reduction can be used to obtain certain classes of exact analytic solutions of systems of partial differential equations.
We use the words “symmetry reduction” in a rather broad by: The goal of these methods is the expression of a solution in terms of quadrature in the case of ordinary differential equations of first order and a reduction in order for higher order equations.
For partial differential equations at least a reduction in the number of independent variables is sought and in favorable cases a reduction to. 41 Transforming Partial Differential Equations Systems of Differential Equations Systems of ODEs Systems of PDEs The Laplacian in Different Coordinate Systems Similarity Methods.
A code has been written to use the algebraic computer system MACSYMA to generate systematically the infinitesimal similarity groups corresponding to.
A Method for Generating Approximate Similarity Solutions of Nonlinear Partial Differential Equations Mazhar Iqbal, 1 M. T. Mustafa, 2 and Azad A. Siddiqui 3 1 Department of Basic Sciences and Humanities, EME College, National University of Sciences and Technology (NUST), Peshawar Road, RawalpindiPakistanCited by: 1.The method of characteristics is appropriate to solve initial value problems of hyperbolic type: semi linear first order differential equations, one-dimensional wave equation.
In principle all solutions can be found using this method. Similarity solutions are a special type of solutions that reflect invariant properties of the equation.The EqWorld website presents extensive information on solutions to various classes of ordinary differential equations, partial differential equations, integral equations, functional equations, and other mathematical equations. |
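In the same spirit as the MACSYMA and DESOLV calculations described above, here is a minimal modern sketch in SymPy (my own example, not code from any of the cited works), showing how a similarity ansatz reduces the heat equation u_t = u_xx to an ordinary differential equation:

```python
import sympy as sp

x, t, s = sp.symbols('x t s', positive=True)
f = sp.Function('f')

u = f(x / sp.sqrt(t))                        # similarity ansatz with variable s = x/sqrt(t)
residual = sp.diff(u, t) - sp.diff(u, x, 2)  # heat-equation residual u_t - u_xx

# Substituting x = s*sqrt(t) and clearing the common factor 1/t leaves an ODE in s only.
ode = sp.simplify((residual.subs(x, s * sp.sqrt(t)) * t).doit())
print(ode)   # should print -f''(s) - s*f'(s)/2, i.e. f'' + (s/2) f' = 0
```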
What is the complex cosine function? A complex cosine is a complex object with a complex structure. A cosine is not a Riemannian manifold. If you are interested in the complex structure of the complex manifold, then you want to know how this complex structure is related to the complex structure on this manifold (see more about complex cosines). The complex structure of the complex manifold is the complex structure that the complex manifold is really like. The complex structures that you want to measure are the complex structures that actually lie on the real line or on the complex plane. You can measure them by means of a complex structure on the complex manifold. You can also measure them via the complex structures of the real line and the complex plane: imagine that you are measuring the complex structure from a plane of your choice. Then you can measure the complex structure by your complex structure on that plane. Now, if you have a complex structure for which you can find the complex structure, then you can measure it by the complex structure itself on that plane: for example, the complex structure for a two-dimensional sphere can be measured by a complex structure if the complex structure is measured by one of the complex structures on the sphere, which is then called the real structure. Now, in physics, a complex structure is always a real structure, since it is a complex structure of real dimension 1. The real structure is the complex one. This is because the complex structure has no real dimension. A real structure is a complex 2-form, and it is a real structure of dimension 2. How does a complex structure change if you think of a real structure? The real space is the space of complex structures. A real space is a real manifold. What is the complex cosine function? A simple basic example of this is the complex gamma function. Let's look a little deeper at the complex gamma functions. The complex gamma function can be represented as …, where the complex gamma is a real number, and hence we know the complex gamma has a real number. For example, if the real number 10 is the complex number 10, then the complex gamma can be represented by … This means … So we know that the real number 1 is 10. Now let's turn to the complex gamma as a function of the complex variable Z.
We know that the complex gamma in the complex plane is …, where Z is the complex unit vector with respect to the complex plane. Further, since the complex gamma lies in the complex half-plane, Z = 4, and the complex gamma extends to infinity, we can write … So the complex gamma does not have a real number in general. So what does this mean? The answer to that question is that the complex function is just a function of Z. Now when we do the integration of the complex gamma, we have the complex gamma going from 0 to 1. Thus the complex gamma we have is given by …, because we know that now is when we evaluate the real part of the complex function. So since the complex function has only real values, we want to find that it is real. Here we know that for 0 < z < 1, the complex number 0 is real. So the complex gamma must be real. So we have … Since the complex gamma goes from 0 to 2, it means that its complex part is …, which means that the complex part of the function is real. So in this case the complex part must be real, as we have shown. What we have now is the real part, which we have seen before. This is a complex gamma function with real parts. But what is the real function? The real part of a complex gamma is given by the following expression: … We can find the complex gamma by finding the complex part by doing a real part evaluation. Okay, so we have the function … So now we have the whole complex gamma. Or now we have a real part of our complex gamma. So we have … What is the complex cosine function? Cosine functions are not only commonly called complex forms of function but also forms of other types of physical observables, such as the so-called discrete cosines or the so-called complex-valued functions.
Cosines: Cosines are the operators that make up a function. The complex cosine is defined as the following operator theorem, which has been proved by many mathematicians around the world: …

Cosmics: Cosmic rays and photons are manifestations of the Earth's gravitational field. Cosmic photons are the light that is emitted by the Earth's internal gravity and can be sent to the Sun as a result of radiation from a sun.

Cosmological constant: the cosine of a cosine of Einstein's equations, which are the equations of the Einstein field equations.

Cosmas: the cosmological constant as defined in the physical theory of general relativity.

Cosms: the functions that make up the Eulerian motion of the Earth.

Cosmega-1: the cosmas of a cosmological parameter.

Cosmos: the fractional cosmological scale factor.

Cosmo: the cosmos of a cosmology.

Geometries: Cosmol is the cosmol of the mathematical formalism of cosmology, which is a special form of Einstein's field equation, also called the Einstein field equations. The cosmol is defined as a special form for the cosmography of the earth, where the cosmol is used to describe the way in which the earth is located.

Current cosmographs: Cosmol is a book about Euler's field equation of the Earth; the cosmol has been published by the University Press of America.

See also: Cosmology
Chapter review fragments on functions and their graphs, collected from assorted precalculus and algebra materials:

- Graph a periodic function: sketch the graph of y = sin θ for 0° ≤ θ ≤ 360° (0 ≤ θ ≤ 2π). The period at which y = sin θ or y = cos θ begins to repeat is 2π.
- Functions describe situations where one quantity determines another. The graph of a function is the set of ordered pairs consisting of an input and the corresponding output. An ordered pair is given as (x-coordinate, y-coordinate). Families of functions are delineated by what operators are used on the dependent variable.
- Function inverses: identify a particular point which is on the graph of every logarithm function; understand the relationship between the exponential function f(x) = e^x and the natural logarithm function f(x) = ln x. Derivatives of inverse functions: if y = f(x) and x = f⁻¹(y) are differentiable inverse functions, then their derivatives are reciprocals: (dx/dy)(dy/dx) = 1.
- Linear functions and their graphs: slope and rate of change; linear equations in two variables; you should know the important facts about lines.
- Domain and range: the student will be able to identify the domain and range of a function and to find the domain of each function, using interval notation for the answer.
- Library of functions: piecewise-defined functions; graphs of basic functions (there are six basic functions to explore); increasing, decreasing, and constant behavior; shifting, reflecting, and stretching graphs; combinations of functions; absolute value functions.
- Worked examples: a relation is not a function if a distinct x-value corresponds to two different y-values; from an equation such as y = 2x − 3, the y-intercept is −3; solving an equation like 2x² + y = 5 for y and checking points such as (0, 2) and (0, −2) on the graph.
- Quadratic functions: vertex form of the quadratic function; exploring properties of parabolas (an axis of symmetry is a line that divides a parabola; since a > 0, the parabola opens upward); graph quadratic functions using x-intercepts; modeling with quadratic functions.
- Polynomial and power functions: functions of the form f(x) = a·x^n, where a and n are constant real numbers, are power functions; real zeros of polynomial functions; graphs of polynomial functions showing how multiplicity and end behavior affect the graph.
- Rational functions and their graphs: graph rational functions; identify vertical asymptotes; use arrow notation; multiplying and dividing rational expressions.
- Exponential and logarithmic functions: an exponential function f with base b has a base greater than 0 and the independent variable as the exponent; evaluate and graph exponential functions, use transformations to graph them, and use compound interest formulas. Logarithmic differentiation: it is often advantageous to use logarithms to differentiate certain functions.
- Transformations of graphs: follow a few easy steps to sketch transformations of any parent function; transform radical functions by changing parameters; work with transformations of exponential functions.
- Trigonometric functions and their graphs: graphing the tangent function (amplitude, period, phase shift, and vertical shift); trigonometric functions of large and/or negative angles.
- Linear models: correlation and best-fitting lines; draw a scatterplot by hand and on a calculator; linear regression.
- Limits and their properties: find the limit of a function (if it exists); write a simpler function that agrees with the given function at all but one point; limits involving infinity; continuity.
- Precalculus starts off with several lessons familiar to students coming from a college algebra course, reviewing topics like linear inequalities, quadratic equations, and inverses of functions, before covering more advanced topics to prepare them for calculus.
Use the zeros of the function and the end behavior of the function. Chapter 1: Notes : Answers to Problem Sets : Review Problems : Relations & Functions (2-1) Problem Set 2-1: Video Lesson and Practice: Linear Equations (2-2) Problem Set 2-2: Video Lesson and Practice : Video Lesson and Practice: Using Linear Models (2-4) Problem Set 2-4 : Best Fit Lines : Absolute Value Graphs (2-5); Translations (2-6) Problem. 2-2 Graphs of Relations and Functions. Erdman E-mail address: [email protected] 2 Example 2 Graphs of In the same coordinate plane, sketch the graph of each function by hand. 11-1 Graph of the Sine Function 11-2 Graph of the Cosine Function 11-3 Amplitude,Period,and Phase Shift 11-4 Writing the Equation of a Sine or Cosine Graph 11-5 Graph of the Tangent Function 11-6 Graphs of the Reciprocal Functions 11-7 Graphs of Inverse Trigonometric Functions 11-8 Sketching Trigonometric Graphs Chapter Summary Vocabulary. 1 Characteristics of Exponential Functions Section 7. Precalculus: A Functional Approach to Graphing and Problem Solving prepares students for the concepts and applications they will encounter in future calculus courses. Functions and Their Graphs; Polynomial and Rational Functions Polynomial Functions and Their Graphs (3. 6 Rational Functions and Asymptotes 2. Lesson 9-3 Rational Functions and Their Graphs. 4 Complex Numbers 2. 1 introduces the concept of function and discusses arithmetic operations on functions,limits, one-sidedlimits, limitsat ˙1, and monotonicfunctions. b) The function y = 6x is an exponential function. 6: The Chain Rule. 1 polynomial functions and their graphs - 3. Chapter 2 Functions and Graphs 2. Thus, (0, 2) and (0, -2) are on the graph. Identify horizontal asymptotes. 19 Multiple-Choice Problems on Derivatives 3. Graphing linear equations is pretty simple, but you'll reliably get correct answers (that is, you'll reliably draw good graphs) only if you do your work neatly. The graph is increasing in a straight line from left to right because for each mile the cost goes. Solve real-life problems. 29 Functions and their Graphs The concept of a function was introduced and studied in Section 7 of these notes. 1 - Functions Objectives: The student will be able to do point-by-point plotting of equations in two variables. |
Abstract: This article uses quantile regression and other methods to construct a Bitcoin volume-price indicator, and briefly describes its application scenarios. A deeper understanding of the Bitcoin market can also provide useful reference points for regulators.
Traders analyze Bitcoin's trading volume and price changes in order to predict its future price. Although we believe the price of Bitcoin at the next moment is hard to predict, the volume-price relationship still carries information that lets us make a rough estimate of the price at a particular moment.
1. A special case of the relationship between volume and price
When analyzing the relationship between volume and price, there is a special case: trading volume is large, but the price changes little. In a generally volatile market, this kind of price balance suggests three things that may be true:
1. The main long and short forces disagree, each side regarding the current price as a bottom or a top;
2. The traded volume shows how much capital the main long and short forces are willing to commit at this price;
3. To break this balance, either side would have to commit more capital than this.
2. The principle of the indicator
When this special situation appears, the bulls and the bears have completed a first round of the game. With their "ammunition" exhausted, the next round of evaluation, probing and contest can only come after some time, so a quiet interval between two rounds is likely. This is similar to an earthquake: deformed rock quickly releases its accumulated elastic potential energy through rupture, and it generally takes a long time before the next earthquake of the same magnitude occurs.
Therefore, Bitcoin is likely to fluctuate around this price for a short period, which gives an approximate price for the next moment. This only means that a drastic price change at the next moment is less likely; the possibility still exists. The volume-price indicator built on this observation is therefore mainly an auxiliary indicator, and in actual trading it must be used together with other indicators.
It should be noted that "volume is large, but the price changes little" may show up on a single K-bar (candlestick), or across several consecutive K-bars: after a long game between longs and shorts the price swings sharply, yet the net price change is still small (the opening price of the first of these consecutive K-bars is close to the closing price of the last). An implementation of the indicator should handle both cases.
How large a volume, and how small a price change, should count towards the indicator? The threshold should differ between a quiet market and a hot one, and should be set flexibly according to recent typical trading volumes and price changes. We use quantile regression to achieve this.
3. Introduction to quantile regression
The main purpose of classical regression is to estimate the mean of the dependent variable from the explanatory variables. When the regression assumptions hold, this method is effective; when the situation is non-standard, it can fail. In particular, some data cannot satisfy two key assumptions: normality and homogeneity of variance. This is exactly the kind of problem quantile regression can handle, because it relaxes these assumptions. In addition, quantile regression gives researchers a new perspective, unavailable in classical regression: the effect of the explanatory variables on the location, scale and shape of the distribution of the dependent variable.
The idea of quantile regression originated around 1760, when Rudjer Josip Boscovich, a travelling Croatian scholar with many titles (physicist, astronomer, diplomat, philosopher, poet and mathematician), came to London to present his early median regression method.
Koenker and Bassett (1978) proposed a model more general than median regression: the quantile regression model (QRM).
4. A Bitcoin volume-price indicator using quantile regression
1. Construct the "K-bar summary data"
Aggregate the one-minute K-bar data of several mainstream spot exchanges (for this article, Binance, Gemini, Huobi and OKEx are used). The opening, closing, highest and lowest prices of each summary K-bar are the averages of the corresponding values across these exchanges for the same minute, and its trading volume is the sum of their volumes. This yields K-bar data that roughly reflects the overall state of the market.
Only the most recent 120 K-bars are used here.
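As a concrete illustration, here is a minimal sketch of this aggregation step in Python. The article does not publish its code, so the column names, the minute alignment, and the pandas layout below are all assumptions:

```python
import pandas as pd

def build_summary_kbars(per_exchange: dict, n: int = 120) -> pd.DataFrame:
    """Average OHLC across exchanges, sum volume, keep the last n one-minute bars.

    Each value of `per_exchange` is assumed to be a DataFrame indexed by minute
    with columns ['open', 'high', 'low', 'close', 'volume'] (real exchange APIs
    differ in naming and alignment).
    """
    frames = pd.concat(per_exchange, axis=1)  # hierarchical columns: (exchange, field)
    summary = pd.DataFrame({
        field: frames.xs(field, axis=1, level=1).mean(axis=1)
        for field in ("open", "high", "low", "close")
    })
    summary["volume"] = frames.xs("volume", axis=1, level=1).sum(axis=1)
    return summary.dropna().tail(n)
```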
2. Exclude K-bars with small trading volume
The indicator values of this volume-price indicator come from the "K-bar summary data" above. Since we are looking for the situation where "volume is large, but the price changes little", K-bars with small volume should be excluded when selecting indicator values; only those with large volume are kept.
For example, if current per-bar trading volumes range between 20 and 1000 coins, then most K-bars with a volume of 20-50 coins can obviously be excluded and should not generate indicator values.
3. Constructing “volume-price difference” data
In order to apply quantile regression, we construct "volume-price difference" data from the K-bar summary data.
The volume of each K-bar is computed as described above, and the price difference is the absolute value of the K-bar's closing price minus its opening price.
If "volume is large, but the price changes little" occurs across several consecutive K-bars, then the "price difference" is the absolute value of the difference between the opening price of the first of these consecutive K-bars and the closing price of the last.
4. Use “volume-price difference” data to perform quantile regression
Of the quantile regression results, we keep only the most recent one, since it is the one that may still influence the subsequent market.
The figure above shows the result of one such quantile regression. For readability, regression lines are drawn only for the 0.05, 0.25, 0.5 and 0.75 quantiles.
Generally, the larger the trading volume, the more likely a large price move becomes, so most points (data) fitting this pattern appear towards the upper right of the chart. Points towards the lower right (if any) show comparatively small price moves on comparatively large volume; quantile regression performs exactly this screening.
Data points with large volume lying below the 0.05-quantile regression line are marked with red dots and taken as indicator values. The meaning: within at most the latest 120 K-bars, there is a point (data) whose volume is large enough and whose price move is small enough that it belongs to the less than 5% of the data lying below this line.
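A minimal sketch of this screening step, assuming the summary K-bars built above and using the quantile regression implementation in statsmodels. The article's actual code is not published; the `min_volume` cutoff is a hypothetical parameter, and the merging of consecutive K-bars is not handled here:

```python
import pandas as pd
import statsmodels.formula.api as smf

def indicator_points(kbars: pd.DataFrame, min_volume: float, q: float = 0.05) -> pd.DataFrame:
    """Flag high-volume bars whose |close - open| falls below the q-quantile
    regression line of price difference on volume.

    `kbars` is assumed to have 'open', 'close', 'volume' columns.
    """
    data = kbars[kbars["volume"] >= min_volume].copy()
    data["price_diff"] = (data["close"] - data["open"]).abs()
    fit = smf.quantreg("price_diff ~ volume", data).fit(q=q)
    predicted = fit.predict(data)
    # Indicator: large volume, yet price difference below the low-quantile line.
    return data[data["price_diff"] < predicted]
```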
The indicator value shown in the figure above was actually formed from 3 consecutive K-bars covering 03:59-04:01 Beijing time on 2021-10-30. The total trading volume of these 3 K-bars was 523.5 coins, yet it produced a price move of only about $3.4: a three-minute game between the bulls and the bears. The opening price of these 3 minutes was $62,392.47 and the closing price $62,389.04. Since then, per-bar trading volume remained low, and the price of Bitcoin fluctuated mildly around $62,300-62,500. This indicator value can be drawn as follows:
Another example: the indicator value shown in the figure below was formed from 4 K-bars. Their total trading volume was 1,646.6 coins, yet it produced a price move of only about $14.7. Afterwards, per-bar trading volume remained relatively low, and in the following 12 minutes the price fluctuated mildly around $61,000-61,200. And the farther in time from the large trade flagged by the indicator value, the greater the possibility of a large price change.
5. Using this volume-price indicator
The more volatile market shown in the figure above occurred at 2:00 Beijing time on 2021-11-4 (18:00 UTC). The Bitcoin perpetual futures contract on one exchange saw price swings of nearly $2,000; at that time, the total spot volume across the 4 exchanges was about 2,682.6 coins. This can be read as the market reacting to the Fed's announcement of its November Monetary Policy Committee resolution at that point in time (the Fed would, as the market expected, formally begin the taper process, reducing bond purchases by $15 billion per month while keeping the policy rate unchanged), but such market conditions are not uncommon even without relevant news. In such conditions it is difficult to execute trading instructions at the expected price, and placing orders, opening positions and closing positions carry large uncontrollable risks. For important strategies, we therefore want execution in a relatively stable market.
However, when the volume-price indicator described in this article produces a value, i.e. "volume is large, but the price changes little", it suggests that the main long and short forces may have just finished a round of the game and that the market has temporarily reached a supply-demand equilibrium around the corresponding price. This provides a short, rough window for price estimation: subsequent trading volume tends to decline, the price fluctuates mildly around the corresponding level, and a stable market is more probable. As time passes, the effect of this balance gradually weakens, so the information revealed by the indicator is time-sensitive.
Bitcoin's market is relatively stable most of the time, and when this volume-price indicator produces a value, the subsequent market is stable with high probability. This has several benefits. The order placement, position opening and position closing operations mentioned above can avoid the risks caused by drastic price changes. In a stable market, trading instructions can also be executed smoothly by the exchange, avoiding the risk that an overloaded exchange fails to execute instructions under extreme market conditions. In addition, the main players can learn from the indicator the volume level at which the market balanced, and can try committing more than that amount in order to break the current balance and push the price in a chosen direction.
This article has discussed the situation where "volume is large, but the price changes little", and has used quantile regression, among other methods, to construct a Bitcoin volume-price indicator. Other, more "intelligent" techniques could be used to build such an indicator, but quantile regression is easy to understand and fast to compute, which is a clear advantage.
It must be emphasized again that even when this indicator produces a value, a drastic price change at the next moment remains possible, so in actual operation it should be used as an aid, together with other indicators.
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/a-bitcoin-volume-and-price-index-application-of-quantile-regression/
Amplification arguments for large sieve inequalities
We give a new proof of the arithmetic large sieve inequality based on an amplification argument, and use a similar method to prove a new sieve inequality for classical holomorphic cusp forms. A sample application of the latter is also given.
Key words and phrases:Large sieve inequality, modular form, amplification
2000 Mathematics Subject Classification:Primary 11N35, 11F11
1. The classical large sieve
The classical arithmetic large sieve inequality states that, for any real numbers $N \geq 1$, $Q \geq 1$, and any choice of subsets $\Omega_p \subset \mathbf{Z}/p\mathbf{Z}$ for primes $p \leq Q$, we have
$$|\{ n \leq N \,:\, n \bmod p \notin \Omega_p \text{ for all } p \leq Q \}| \leq \Delta H^{-1}, \qquad (1)$$
where
$$H = \sum_{q \leq Q}^{\flat} \prod_{p \mid q} \frac{|\Omega_p|}{p - |\Omega_p|}$$
and $\Delta$ is any constant for which the "harmonic" large sieve inequality holds: for any complex numbers $(a_n)_{n \leq N}$, we have
$$\sum_{q \leq Q}^{\flat} \sum_{a \bmod q}^{*} \Big| \sum_{n \leq N} a_n e\Big(\frac{an}{q}\Big) \Big|^2 \leq \Delta \sum_{n \leq N} |a_n|^2, \qquad (2)$$
the notation $\sum^{\flat}$ and $\sum^{*}$ denoting, respectively, a sum over squarefree integers, and one over residue classes coprime with the (implicit) modulus, which is $q$ here.
By work of Montgomery-Vaughan and Selberg, it is known that one can take
$$\Delta = N - 1 + Q^2$$
(see, e.g., [7, Th. 7.7]).
There are a number of derivations of (1) from (2); for one of the earliest, see [10, Ch. 3]. The most commonly used is probably the argument of Gallagher involving a “submultiplicative” property of some arithmetic function (see, e.g., [8, §2.2] for a very general version).
We will show in this note how to prove (1) quite straightforwardly from the dual version of the harmonic large sieve inequality: $\Delta$ is also any constant for which
$$\sum_{n \leq N} \Big| \sum_{q \leq Q}^{\flat} \sum_{a \bmod q}^{*} \beta(a, q)\, e\Big(\frac{an}{q}\Big) \Big|^2 \leq \Delta \sum_{q \leq Q}^{\flat} \sum_{a \bmod q}^{*} |\beta(a, q)|^2 \qquad (3)$$
holds for arbitrary complex numbers $\beta(a, q)$. This is of some interest because, quite often (but not always: Gallagher's very short proof, found e.g. in [11, Th. 1, p. 549], proceeds directly), the inequality (2) is proved by duality from (3), and because, in recent generalized versions of the large sieve (see [8]), it often seems that the analogue of (3) is the most natural inequality to prove, or at least the most easily accessible. So, in some sense, one could dispense entirely with (2) for many applications! In particular, note that both known proofs of the optimal version with $\Delta = N - 1 + Q^2$ proceed by duality.
Note that some ingredients of many previous proofs occur in this new argument. Also, there are other proofs of (1) working directly from the inequality (3) which can be found in the older literature on the large sieve, usually with explicit connections with the Selberg sieve (see the references to papers of Huxley, Kobayashi, Matthews and Motohashi in [11, p. 561]), although none of those that the author has seen seems to give an argument which is exactly identical or as well motivated. Also, traces of this argument appear earlier in some situations involving modular forms, e.g., in . In Section 2, we will use the same method to obtain a new type of sieve inequality for modular forms; in that case, it doesn’t seem possible to adapt easily the classical proofs.
Indeed, maybe the most interesting aspect of our proof is that it is very easy to motivate. It flows very nicely from an attempt to improve the earlier inequality (4) of Rényi.
We will explain this quite leisurely; one could be much more concise and direct (as in Section 2).
Let
$$S = \{ n \leq N \,:\, n \bmod p \notin \Omega_p \text{ for all } p \leq Q \}$$
be the sifted set; we wish to estimate from above the cardinality of this finite set. From (3), the idea is to find an "amplifier" of those integers remaining in the sifted set, i.e., an expression of the form
$$A(n) = \sum_{q \leq Q}^{\flat} \sum_{a \bmod q}^{*} \beta(a, q)\, e\Big(\frac{an}{q}\Big)$$
which is large (in some sense) when $n \in S$. Then an estimate for $|S|$ follows from the usual Chebychev-type manoeuvre.
To construct the amplifier, we look first at a single prime $p \leq Q$. If $n \in S$, we have $n \bmod p \notin \Omega_p$. If we expand the characteristic function of $\Omega_p$ in terms of additive characters (we use this specific basis in order to apply (3), but any orthonormal basis containing the constant function would do the job, as in [8]), we have then
$$1_{\Omega_p}(x) = \frac{|\Omega_p|}{p} + \sum_{a \bmod p}^{*} \widehat{1}_{\Omega_p}(a)\, e\Big(\frac{ax}{p}\Big),$$
and the point is that the contribution of the constant function (0-th harmonic) is, indeed, relatively "large", because it is
$$\frac{|\Omega_p|}{p}$$
and exactly reflects the probability of a random element being in $\Omega_p$. Thus for $n \in S$, we have
$$\sum_{a \bmod p}^{*} \widehat{1}_{\Omega_p}(a)\, e\Big(\frac{an}{p}\Big) = -\frac{|\Omega_p|}{p}. \qquad (5)$$
For $n \in S$, the size of the amplifier is
by (5), while on the other hand, by applying the Parseval identity in $L^2(\mathbf{Z}/p\mathbf{Z})$, we get
So we obtain
i.e., exactly Rényi’s inequality (4), by this technique.
To go further, we must exploit all the squarefree integers (and not only the primes) to construct the amplifier. This is most easily described using the Chinese Remainder Theorem to write
$$\mathbf{Z}/q\mathbf{Z} \simeq \prod_{p \mid q} \mathbf{Z}/p\mathbf{Z}$$
and putting together the amplifiers modulo primes $p \mid q$: if $n \in S$ then $n \bmod p \notin \Omega_p$ for all $p \mid q$, and hence multiplying out (5) over $p \mid q$, we find constants $\beta(a, q)$, defined for $(a, q) = 1$ (because $\widehat{1}_{\Omega_p}(a)$ is defined for $a$ coprime with $p$), such that
$$\sum_{a \bmod q}^{*} \beta(a, q)\, e\Big(\frac{an}{q}\Big) = \prod_{p \mid q} \Big( -\frac{|\Omega_p|}{p} \Big) \qquad \text{for } n \in S.$$
Moreover, because the product decomposition of the Chinese Remainder Theorem is compatible with the Hilbert space structure involved, we have
$$\sum_{a \bmod q}^{*} |\beta(a, q)|^2 = \prod_{p \mid q}\ \sum_{a \bmod p}^{*} |\widehat{1}_{\Omega_p}(a)|^2.$$
Arguing as before, we obtain from (3) – using all squarefree moduli this time – that
This is not quite (1), but we have some flexibility to choose another amplifier: notice that this expression is not homogeneous if we multiply the coefficients by scalars depending only on $q$, and we can use this to find a better inequality. Precisely, let
where the $\xi_q$ are arbitrary real coefficients.
Then we have the new amplification property
with altered “cost” given by
so that, arguing as before, we get
By homogeneity, the problem is now to minimize a quadratic form under a linear constraint. This is classical, and is done by Cauchy's inequality: writing
for ease of notation, we have
with equality if and only if $(\xi_q)$ is proportional to
in which case
and we get exactly the bound $|S| \leq \Delta H^{-1}$, which is (1).
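For the record, the optimization step can be stated generically as follows, with placeholder weights $c_q > 0$ and linear coefficients $\ell_q$ (the specific values come from the amplifier construction above): by Cauchy's inequality,
$$\Big( \sum_{q} \ell_q \xi_q \Big)^2 = \Big( \sum_{q} \sqrt{c_q}\, \xi_q \cdot \frac{\ell_q}{\sqrt{c_q}} \Big)^2 \leq \Big( \sum_{q} c_q \xi_q^2 \Big) \Big( \sum_{q} \frac{\ell_q^2}{c_q} \Big),$$
with equality if and only if $\xi_q$ is proportional to $\ell_q / c_q$, so the minimum of the quadratic form $\sum_q c_q \xi_q^2$ subject to the constraint $\sum_q \ell_q \xi_q = 1$ is exactly $\big( \sum_q \ell_q^2 / c_q \big)^{-1}$.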
(1) The last optimization step is reminiscent of the Selberg sieve (see, e.g., [7, p. 161, 162]). Indeed, it is well known that the Selberg sieve is related to the large sieve, and particularly to the dual inequality (3), as explained in [5, p. 125]. Note however that the coefficients we optimize for are of an "amplificatory" nature, different from the coefficients typically sought in Selberg's sieve, which are akin to the Möbius function and of a "mollificatory" nature.
(2) The argument does not use any particular feature of the classical sieve, and thus extends immediately to provide a proof of the general large sieve inequality of [8, Prop. 2.3] which is directly based on the dual inequality [8, Lemma 2.8]; readers interested in the formalism of [8] are encouraged to check this.
What are the amplifiers above in some simple situations? In the case (maybe the most important) where we try to count primes, we then take $\Omega_p = \{0\}$ to detect integers free of small primes by sieving, and (5) becomes, after multiplying by $p$,
$$\sum_{a \bmod p}^{*} e\Big(\frac{an}{p}\Big) = -1$$
if $p \nmid n$. Then, for $q$ squarefree, the associated detector is the identity
$$\sum_{a \bmod q}^{*} e\Big(\frac{an}{q}\Big) = \mu(q)$$
if $(n, q) = 1$, or in other words, it amounts to the well-known formula
$$c_q(n) = \mu(q) \qquad \text{for } (n, q) = 1$$
for the values of a Ramanujan sum with coprime arguments. Note that in this case, the optimization process above replaces these coefficients with rescaled versions.
On the other hand, for an example in a large sieve situation, we can take $\Omega_p$ to be the set of squares in $\mathbf{Z}/p\mathbf{Z}$. The characteristic function (for odd $p$) is
with coefficients given (essentially) by Gauss sums
2. Sieving for modular forms
To illustrate the possible usefulness of the proof given in the first section, we use the same technique to prove a new type of large sieve inequality for classical (holomorphic) modular forms. The originality consists in using known inequalities for Fourier coefficients (due to Deshouillers-Iwaniec) as a tool to obtain a sieve where the cusp forms are the objects of interest, i.e., to bound from above the number of cusp forms of a certain type satisfying certain local conditions.
Let be a fixed even integer. For any integer , let be the finite set of primitive holomorphic modular forms of level and weight , with trivial nebentypus (more general settings can be studied, but we restrict to this one for simplicity). We denote by
the Fourier expansion of a form at the cusp at infinity.
We consider on this finite set the “measure” defined by
where is the Petersson inner product. This is the familiar “harmonic weight”, and we denote
the corresponding averaging operator and "probability", for an arbitrary property referring to the modular forms. (Note that it is only asymptotically, as $N \to +\infty$, that this is a probability measure.)
Imitating the notation in [8, Ch. 1], we now denote by
the -th Fourier coefficient maps, which we see as giving “global-to-local” data, similar to reduction maps modulo primes for integers. If is a squarefree integer coprime with , we denote
which we emphasize is a tuple of Fourier coefficients, that should not be mistaken with the single number .
The basic relation with sieve is the following idea: provided is small enough, the become equidistributed as for the product Sato-Tate measure
and this is similar to the equidistribution of arithmetic sequences like the integers or the primes modulo squarefree , and the independence due to the Chinese Remainder Theorem.
The quantitative meaning of this principle is easy to describe if is bounded (independently of ), but requires some care when it grows with . For our purpose, we express it as given by uniform bounds for Weyl-type sums associated with a suitable orthonormal basis of . The latter is easy to construct. Indeed, recall first the standard fact that the Chebychev polynomials , , defined by
form an orthonormal basis of . Then standard arguments show that for and the measure above on , the functions
defined for any $Q$-friable integer (i.e., an integer divisible only by primes $p \leq Q$), factored as
form an orthonormal basis of . (In particular we have , the constant function .)
We have also the following fact which gives the link between this orthonormal basis and our local data : for any integer coprime with and divisible only by primes , and any , we have
This is simply a reformulation of the Hecke multiplicativity relations between Fourier coefficients of primitive forms.
Our situation is similar to that of classical sieve problems, where (in the framework of ) we have a set (with a finite measure ) and surjective maps with finite target sets , each equipped with a probability density , so that the equidistribution can be measured by the size of the remainders defined by
and the independence by using finite sets , and
and looking at
Here the compact set requires the use of infinitely many functions to describe an orthonormal basis. Another (less striking) difference is that our local information lies in the same set for all primes, whereas classical sieves typically involve reduction modulo primes, which lie in different sets.
We now state the analogue, in this language, of the dual large sieve inequality (3).
With notation as above, for all , all integers , all complex numbers defined for in the set of -friable integers coprime with , we have
where the implied constant depends only on and on the left-hand side is the radical .
Proof of Proposition 1.
This is in fact simply a consequence of one of the well-known large sieve inequalities for Fourier coefficients of cusp forms (as developed by Iwaniec and by Deshouillers–Iwaniec, see [3]). The point is that because of (9), the left-hand side of (10) can be rewritten
We can now enlarge this by positivity; remarking that
can be seen as a subset of an orthonormal basis of the space of cusp forms of weight and level , and selecting any such basis , we have therefore
where we put if , and where the are the Fourier coefficients, so that
(as earlier for Hecke forms). Now by the large sieve inequality in [7, Theorem 7.26], taking into account the slightly different normalization (one case requires adding an extra factor), we have
with an absolute implied constant, and this leads to (10). ∎
In terms of equidistribution statements (which are hidden in this proof), the basic one for an individual prime $p$ is that
for all. Such results are quite well-known and follow in this case from the Petersson formula. There is an implicit version already present in Bruggeman's work (see [1, §4], where it is shown that, on average, "most" Maass forms, ordered by Laplace eigenvalue, satisfy the Ramanujan-Petersson conjecture), and the first explicit result goes back to Sarnak [13], still in the case of Maass forms (this is the only result we know of that discusses the issue of independence of the coefficients at various primes). Serre [14] and Conrey, Duke and Farmer [2] gave similar statements for holomorphic forms, and Royer [12] described quantitative versions in that case.
We can now derive the analogues of the arithmetic inequality (1) and of Rényi’s inequality (4). The basic “sieve” questions we look at is to bound from above the cardinality (or rather, -measure) of sets of the type
for . Because the expansion of the characteristic function of in terms of Chebychev polynomials involves infinitely many terms, we restrict to a simple type of condition sets of the following type:
is a real-valued polynomial and (the degree is assumed to be the same for all ). Note that is the -average of , so our sets are those where the Fourier coefficients for are “away” from the putative average value according to the Sato-Tate measure.
Denote also by
the variance of .
Then the analogue of (4) is
where the implied constant depends only on , and that of (1) is
where , is arbitrary and
the implied constant depending again only on .
To prove (14), we use the “amplification” method of the previous section. The basic observation is that if, for some prime , we have
then it follows that
Now let , for , be arbitrary auxiliary positive real numbers, and let
for , the product of all primes . If (15) holds for all coprime with , then we find by multiplying out that, for any integer , i.e., such that
and for such , we have
which translates to
where runs over the set of integers of the type
so , and
Cauchy’s inequality shows that , with equality if
and the inequality above, with this choice, leads to
If one tries to adapt, for instance, the standard proof in , one encounters problems because the latter would (naively at least) involve the problematic expansion of a Dirac measure at a fixed point in terms of Chebychev polynomials.
Here is an easy application of (14), for illustration (stronger results for that particular problem follow from the inequality of Lau and Wu [9], as will be explained with other related results in a forthcoming joint work): it is well-known that for a primitive form, the sequence of real numbers given by its Fourier coefficients changes sign infinitely often, and there has been some recent interest (see, e.g., the paper [6] of Iwaniec, Kohnen and Sengupta) in giving quantitative bounds on the first sign change. We try instead to show that this first sign-change is quite small on average over the forms (compare with ): fix , and let
(any other combination of signs is permissible). This is a “sifted set”, and we claim that
for any , where the implied constant depends only on and . Since is of size about (for fixed ), this is a non-trivial bound for all . Moreover, to prove this bound, it suffices to show
since we have the well-known upper bound for any (see, e.g., [7, p. 138]).
The sets used in are not exactly in the form (12), so we use some smoothing: we claim there exists a real polynomial of degree such that
Assuming such a polynomial is given, we observe that
for some fixed . Therefore, by (14) with , we get for all that
and the implied constant depends only on . By assumption, we have , and an easy lower bound for follows in the range of interest simply from bounding and using known results on the cardinality of : we have
for any , the implied constant depending only on , the choice of and . This clearly gives the result, and it only remains to exhibit the polynomial . One can check easily that
does the job (see its graph); the numerical values of , and are given by
See the letter of Serre in the Appendix of [15] for previous examples showing how to use limited information towards the Sato-Tate conjecture to prove distribution results for Hecke eigenvalues (of a fixed modular form).
[1] R. Bruggeman: Fourier coefficients of cusp forms, Invent. math. 45 (1978), 1–18.
[2] B. Conrey, W. Duke and D. Farmer: The distribution of the eigenvalues of Hecke operators, Acta Arith. 78 (1997), 405–409.
[3] J-M. Deshouillers and H. Iwaniec: Kloosterman sums and Fourier coefficients of cusp forms, Invent. math. 70 (1982), 219–288.
[4] W. Duke and E. Kowalski: A problem of Linnik for elliptic curves and mean-value estimates for automorphic representations, with an Appendix by D. Ramakrishnan, Invent. math. 139 (2000), 1–39.
[5] H. Halberstam and H.E. Richert: Sieve methods, London Math. Soc. Monograph, Academic Press (London), 1974.
[6] H. Iwaniec, W. Kohnen and J. Sengupta: The first negative Hecke eigenvalue, International J. Number Theory 3 (2007), 355–363.
[7] H. Iwaniec and E. Kowalski: Analytic number theory, AMS Colloquium Publ. 53, 2004.
[8] E. Kowalski: The large sieve and its applications: arithmetic geometry, random walks and discrete groups, Cambridge Tracts in Math. 175, 2008.
[9] Y.K. Lau and J. Wu: A large sieve inequality of Elliott-Montgomery-Vaughan type for automorphic forms and two applications, International Mathematics Research Notices, Vol. 2008, doi:10.1093/imrn/rmn162.
[10] H-L. Montgomery: Topics in multiplicative number theory, Lecture Notes Math. 227, Springer-Verlag, 1971.
[11] H-L. Montgomery: The analytic principle of the large sieve, Bull. A.M.S. 84 (1978), 547–567.
[12] E. Royer: Facteurs $\mathbf{Q}$-simples de $J_0(N)$ de grande dimension et de grand rang, Bull. Soc. Math. France 128 (2000), 219–248.
[13] P. Sarnak: Statistical properties of eigenvalues of the Hecke operators, in "Analytic Number Theory and Diophantine Problems" (Stillwater, OK, 1984), Progr. Math. 70, Birkhäuser, 1987, 321–331.
[14] J-P. Serre: Répartition asymptotique des valeurs propres de l'opérateur de Hecke $T_p$, J. American Math. Soc. 10 (1997), 75–102.
[15] F. Shahidi: Symmetric power $L$-functions for $\mathrm{GL}(2)$, in: "Elliptic curves and related topics", edited by H. Kisilevsky and M. Ram Murty, CRM Proc. and Lecture Notes 4, 1994, 159–182.
ENGG2851: Fernando Automotive is a World Famous Automobile Manufacturer in Brazil
Internal Code: 3EEJ
Question 1
Fernando Automotive is a world-famous automobile manufacturer in Brazil. At present, it primarily produces cars and vans. It earns a profit of $5000 on each car and $18000 on each van. It has a limited supply of steel, plastic and paint, all of which are needed for manufacturing vehicles. It can obtain at most 1,000,000 tons of steel, 1,200,000 tons of plastic and 800,000 kilolitres of paint each year.
Each car needs 1 ton of steel, 3 tons of plastic and 2 kilolitres of paint to be manufactured, whereas each van needs 5 tons of steel, 4 tons of plastic and 2 kilolitres of paint. In addition, to maintain its credibility as a dominant player in both car and van manufacturing, the company needs to manufacture at least 100,000 cars and 100,000 vans each year. You, as the planning manager of the company, are asked to use linear programming techniques to determine how many cars and how many vans the company must produce each year to maximize profit.
- Assuming the number of cars manufactured each year is p and number of vans manufactured is q, write down all constraints as inequalities. Make sure to include the three manufacturing constraints, as well as the credibility constraints and the implied non-negativity constraints.
- Express the profit in terms of p and q.
- Use a graphical method to solve the optimization problem, and state how many cars and vans the company should manufacture each year for maximum profit, and what is the corresponding maximum profit. Show all workings.
- Now assume that in the following year, the demand for vans has fallen. Each van now only fetches a profit of $1000, and each car continues to fetch a profit of $5000. What is now the best solution in terms of number of cars and vans to maximize profit? Show all workings.
- Now, assuming that the profit levels stay at $1000 per van and $5000 per car, let us say the company decides to remove any credibility constraint and decides to manufacture the best combination of cars and vans for best profit. However, the supply constraints remain. What is now the optimal number of cars and vans manufactured for highest profit? Show all workings.
- If the company now decides to also manufacture motor bikes, a graphical method may not be best suited for finding the optimal solution. Explain why this is the case and suggest an alternative method to solve the optimization problem. (A solver-based sketch of the two-variable problem follows below.)
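For illustration, the two-variable problem above can be handed to a linear-programming solver. This is a minimal sketch using scipy, not a substitute for the graphical or simplex workings the question asks for; note it solves the LP relaxation, whereas the brief may expect whole vehicles:

```python
from scipy.optimize import linprog

# Decision variables: p = cars per year, q = vans per year.
c = [-5000, -18000]               # linprog minimizes, so negate the profits
A_ub = [
    [1, 5],                       # steel:   p + 5q <= 1,000,000 tons
    [3, 4],                       # plastic: 3p + 4q <= 1,200,000 tons
    [2, 2],                       # paint:   2p + 2q <= 800,000 kL
]
b_ub = [1_000_000, 1_200_000, 800_000]
bounds = [(100_000, None), (100_000, None)]   # credibility constraints

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
p, q = res.x
print(f"cars = {p:,.0f}, vans = {q:,.0f}, profit = ${-res.fun:,.0f}")
```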
Fernando Automotive has a main competitor, Excellent Motors (hereafter EM). There are other car manufacturers besides these two in Brazil. In January 2015, both Fernando and EM plan to launch a new brand of car, and are considering an advertising campaign to back it up. Currently, Fernando has a 35% market share, and EM has a 45% market share in Brazil. Both expect to increase this further.
They can advertise either on TV or via newspaper. Market research shows that TV advertisements are more effective when followed by another car advertisement, as this will reinforce the idea that viewers need to buy a new car. On the other hand, newspaper advertisements are more effective when the competitors do not advertise in newspapers. Neither company knows what advertising strategy the other company will adopt.
If Fernando and EM both advertise in TV, Fernando will get a 12% increase in market share, and EM will get a 8% increase. If both advertise in newspapers, they will both get a 4% increase. If Fernando advertises on newspaper while EM advertises on TV, Fernando will get a 10% increase while EM will lose 4% of the market. If EM advertises on newspaper while Fernando advertise on TV, EM will get a 3% increase while Fernando will lose 2% of the market.
- Assuming that each manufacturer has only two options (advertising in TV or advertising in newspaper), represent the scenario in a pay-off matrix. (3 marks)
- Is this a zero sum game? Justify your answer. (1 mark)
- What do you understand by a ‘dominant strategy’? Is there a dominant strategy for either player in this scenario? (4 marks)
- What do you understand by a ‘Pure strategy Nash Equilibrium’? Are there any Nash equilibrium states in the scenario mentioned above? (4 marks)
- Explain what you understand by a ‘mixed strategy Nash equilibrium’. (3 marks)
- Now assume that new market research shows that, when both companies advertise on TV, Fernando only gets a 7% increase in market share (rather than the 12% believed previously). Furthermore, when Fernando advertises on TV and EM advertises in the newspaper, Fernando in fact gets an 8% increase in market share (not the 2% loss as believed previously). Is there a pure strategy Nash equilibrium now? Come up with a mixed strategy for Fernando that will make it indifferent to the strategy chosen by EM. Similarly, can you devise a strategy for EM that will make it indifferent to the strategy adopted by Fernando? Is there a mixed strategy Nash equilibrium? Show all workings. (A computational sketch follows below.)
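A minimal sketch of the indifference computation for a 2x2 bimatrix game. The encoding of the prose into the payoff matrices below, with rows for Fernando, columns for EM, and strategy order (TV, newspaper), is an assumption:

```python
from fractions import Fraction

def mixed_2x2(A, B):
    """Indifference mixed strategies for a 2x2 bimatrix game.

    A[i][j], B[i][j]: row/column player payoffs when row plays i, column plays j.
    Returns (x, y): probability that row / column plays strategy 0.
    Assumes an interior mixed equilibrium exists (denominators nonzero).
    """
    # Column player indifferent given row mixes with probability x:
    #   x*B[0][0] + (1-x)*B[1][0] = x*B[0][1] + (1-x)*B[1][1]
    x = Fraction(B[1][1] - B[1][0], B[0][0] - B[1][0] - B[0][1] + B[1][1])
    # Row player indifferent given column mixes with probability y:
    y = Fraction(A[1][1] - A[0][1], A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return x, y

# Revised Fernando/EM game (market-share changes, order: TV, newspaper).
A = [[7, 8], [10, 4]]    # Fernando (row player)
B = [[8, 3], [-4, 4]]    # EM (column player)
print(mixed_2x2(A, B))   # -> (Fraction(8, 13), Fraction(4, 7))
```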
Question 3
Part A)
Fernando manufacturers are finding that the existing manufacturing plant is not sufficient for their manufacturing needs. Therefore, management is contemplating either setting up a new plant or upgrading the existing plant. Continuing without a new or upgraded plant is also, in theory, an option. The company intends that any capital investment will be 'spread out' and paid back within a period of ten years. The future of the company after ten years is uncertain and no plan can be made beyond ten years. Setting up a new plant would cost a capital investment of $100M (million), while upgrading the existing plant would cost $60M.
The PM team estimates that, if a new plant is set up, there is a 40% chance the annual profit will increase by $25M, a 30% chance that the profit increase will only be $15M annually, and a 30% chance that it will only be $5M annually. If the plant is upgraded, there is an 80% chance that the annual profit will increase by $15M, a 10% chance that the increase will be $10M, and a 10% chance that it will only be $7M. If no upgrade is made, the company estimates there is a 60% chance of no profit increase, a 20% chance of a profit decrease of $5M, and a 20% chance of a profit decrease of $10M annually. The repayment for capital investments is not included in these profit predictions, and therefore will need to be deducted.
- Selecting the best investment option is often the first step in successful project management. State four investment option analysis techniques that you know of. Mention in each case whether they are qualitative or quantitative.
- Apply the 'Expected Monetary Value' (EMV) method in the above scenario to select the best investment option. Show all workings. (A computational sketch follows after this list.)
- The utility theory is sometimes used to select an investment option. Explain, using a simple example, why the utility analysis might be better than EMV analysis in certain scenarios.
- Apply the Expected Utility analysis in the above scenario to select the best investment option. Assume that f(x) = √x is an appropriate utility function. Show all workings. What is the 'certainty equivalent' in each option?
- Comment on why the best option differs between the two analysis methods, and state with justification which one is more reliable in this context.
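A minimal sketch of the EMV computation, under one reading of the brief: capital is repaid evenly over ten years, with no discounting (both are assumptions):

```python
# Annual EMV ($M) for each investment option.
options = {
    "new plant":  {"capital": 100, "dist": [(0.4, 25), (0.3, 15), (0.3, 5)]},
    "upgrade":    {"capital": 60,  "dist": [(0.8, 15), (0.1, 10), (0.1, 7)]},
    "do nothing": {"capital": 0,   "dist": [(0.6, 0), (0.2, -5), (0.2, -10)]},
}
YEARS = 10
for name, o in options.items():
    repayment = o["capital"] / YEARS                      # $M per year
    emv = sum(p * v for p, v in o["dist"]) - repayment    # expected net annual gain
    print(f"{name:>10}: EMV = {emv:+.2f} $M per year")
```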
Fernando Automotive, in fact, decides to upgrade the existing factory, and launches a project in partnership with a construction company to achieve this. The project is named 'Project Reshine' and will be implemented at a rapid pace, so as not to disrupt production. The project has five phases, as shown in Appendix A.
i. Calculate the critical path of the project. Which activities are on the critical path?
ii. What is the maximum expected duration of the project in days?
Appendix A
Question 4
Anil, Mehmet and Hien work in the chassis wiring department of Fernando ltd, which undertakes assembly line production. Chassis wiring is tedious and needs a lot of patience.
To avoid boredom, Anil, Mehmet and Hien play a game: Once a car arrives for wiring, they measure the time until the next car arrives. If the next car arrives within 3 minutes, Mehmet and Hien pay $1 each to Anil. If the next car arrives within 3 to 5 minutes, Mehmet gets $1 each from Anil and Hien. If the next car arrives between 5 to 10 minutes, nobody pays or gains any money. If the next car arrives after 10 minutes, Hien gets $1 each from Anil and Mehmet. If on average, forty cars arrive for wiring within a shift of eight hours, what is the expected pay-off (or loss) for Anil, Mehmet and Hien respectively?
For this question you can assume that the probability of a random event occurring within T time units of the previous random event is given by P(t ≤ T) = 1 − e^(−λT), where λ is the rate at which the random events occur. Show all steps. (A numerical sketch follows below.)
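A numerical sketch, assuming Poisson arrivals at rate λ = 40 per 8-hour shift and roughly one inter-arrival gap per arriving car over the shift (both modelling assumptions):

```python
import math

lam = 40 / (8 * 60)                    # 40 cars per 8-hour shift => per-minute rate
F = lambda t: 1 - math.exp(-lam * t)   # P(next car arrives within t minutes)

# Betting bands and pay-offs per gap for (Anil, Mehmet, Hien).
p = [F(3), F(5) - F(3), F(10) - F(5), 1 - F(10)]
payoff = [(+2, -1, -1), (-1, +2, -1), (0, 0, 0), (-1, -1, +2)]

per_gap = [sum(pi * pay[j] for pi, pay in zip(p, payoff)) for j in range(3)]
gaps = 40   # one gap per arriving car (an assumption)
for name, e in zip(("Anil", "Mehmet", "Hien"), per_gap):
    print(f"{name}: expected {e * gaps:+.2f} $ per shift")
```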
One day, only Anil is working in the chassis wiring department, as Mehmet and Hien are both on leave. Starting work at 8 AM, he receives vehicles in the following order.

Vehicle   Arrival time   Estimated wiring time   Type
V1        8:00 AM        10 mins                 Car
V2        8:01 AM        3 mins                  Van
V3        8:02 AM        7 mins                  Car
V4        8:03 AM        6 mins                  Van

Anil can complete the work in one of the following orders: a) First come first served b) Shortest job first pre-emptive c) Fixed priority pre-emptive d) Round robin. Assume that any calculation takes negligible time for Anil and that he would use a 2-minute quantum if using round robin. Assume that in fixed priority scheduling, cars have higher priority than vans.
- i) Calculate the average response time for each of the above-mentioned scheduling methods.
- ii) Calculate the 'throughput' (the inverse of average completion time) for each of the above-mentioned scheduling methods.
iii) Comment on why the scheduling method with the best average response time is not the same as the method with the best throughput. (A simulation sketch follows below.)
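A minimal sketch of the preemptive scheduling comparison, using one-minute time slices and taking "response time" to mean completion minus arrival (definitions vary); round robin, which needs an explicit ready queue, is left out:

```python
jobs = {  # name: (arrival minute after 8:00 AM, service minutes, vehicle type)
    "V1": (0, 10, "Car"),
    "V2": (1, 3,  "Van"),
    "V3": (2, 7,  "Car"),
    "V4": (3, 6,  "Van"),
}

def simulate(rank):
    """Run one-minute slices; `rank(name, remaining)` picks the next job
    (smallest value wins), which makes preemption automatic.
    Returns {name: completion time in minutes after 8:00}."""
    remaining = {n: s for n, (_, s, _) in jobs.items()}
    done, t = {}, 0
    while remaining:
        runnable = [n for n in remaining if jobs[n][0] <= t]
        if not runnable:
            t += 1
            continue
        n = min(runnable, key=lambda m: rank(m, remaining[m]))
        remaining[n] -= 1
        t += 1
        if remaining[n] == 0:
            remaining.pop(n)
            done[n] = t
    return done

def report(label, done):
    turnaround = sum(done[v] - jobs[v][0] for v in jobs) / len(jobs)
    throughput = len(jobs) / max(done.values())   # vehicles per minute
    print(f"{label:>16}: avg turnaround {turnaround:.2f} min, throughput {throughput:.3f}/min")

report("FCFS",           simulate(lambda n, r: jobs[n][0]))   # earliest arrival wins
report("SJF preemptive", simulate(lambda n, r: r))            # least remaining time wins
report("Fixed priority", simulate(lambda n, r: (jobs[n][2] != "Car", jobs[n][0])))
```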
The company Belstra Pvt Ltd is a mobile phone manufacturer, and is launching a project to create a new model of mobile phone, Ringo Dingo. Refer to the following activity diagram of a project which has activities A, B, C, D, and E.
i. Which activities are on the critical path? (5 marks)
ii. What is the maximum expected duration of the project in days?
iii. The Ringo Dingo prototype Steering Committee must decide whether the option of using the Sustainable Product Lifecycle on the project is viable, given the associated risk. The estimated project cost is $80m and the additional cost of using the Sustainable Product Lifecycle is $20m, which would bring the total project cost to $100m. The expected revenue from the project based on the current market share is $200m, but Belstra estimates that their market share could increase by as much as 20% as a result of marketing the product as "green technology". This would increase the expected revenue for the project to $240m.
The probability of achieving the full 20% increase in market share, and therefore reaching $240m in project revenue, if the Sustainable Product Lifecycle is used is 80%, with a 20% probability that only $200m in revenue will be achieved. If the Sustainable Product Lifecycle is not used, the probability of achieving $240m in revenue is only 25%, with a 75% probability of still achieving $200m in revenue.
Calculate the Expected Monetary Value (EMV) for each option and recommend whether the Sustainable Product Lifecycle should be used.
Show all workings.
- Explain, using the understanding that you gained from this unit, and using a suitable example, why Expected Monetary Value, Utility theory, and Game theory are all complementary methods, each of which can and must be used in specific contexts.
Question 6 Part A
Linear Programming is a mathematical model to achieve the best outcome (e.g. maximize profits, minimize costs), given some constraints.
a) Explain what you understand by an optimization problem. (2 marks)
b) Explain why the 'simplex method' is able to solve optimization problems that cannot be solved by the 'graphical method' of linear programming.
c) Consider the following scenario:
AgroBig Industries, an Australian company, has just purchased 10,400 hectares of land in Panama, and plans to grow export crops on this piece of land. AgroBig senior management has decided they want to grow plantain and tobacco, but is unsure how many hectares should be devoted to each crop. They have the following information.
- Expected profit from a hectare of plantain (banana) is $ 50 thousand per year. Expected profit from tobacco is $42 thousand per year.
- The company plans to employ a work force of two hundred people. On average, they are each estimated to work two hundred days per year. A hectare of plantain needs 70 man-days per year of labour, whereas a hectare of tobacco needs only 2 man-days per year.
- A hectare of plantain needs 250 hours of irrigation per year, whereas a hectare of tobacco needs only 5 hours of irrigation. The total pump hours (irrigation hours) per year available in the facility are 125,000 hours.
i. Denoting the hectares of plantain as x1 and the hectares of tobacco as x2, write down the three constraints as mathematical inequalities.
ii. What is the expression for total profit per year y in terms of x1 and x2?
iii. Write down five corner-point feasible (CPF) solutions to the problem. (State the values of x1 and x2 in each case.) Show all workings.
iv. Which is the best solution among these CPF solutions? What is the corresponding value of profit? Show all workings.
Part B
Business environment is often conceptualized as a network (graph).
a) State three reasons for modelling projects and business organizations as networks (graphs).
b) A graph has 1475 nodes and 1726 links. What is the average degree of the graph? (You may use a calculator, or the standard formula recalled below.)
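For reference, the standard relation between links and average degree in an undirected graph (assuming the links here are undirected):

$$\bar{d} = \frac{2|E|}{|V|} = \frac{2 \times 1726}{1475} \approx 2.34$$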
Game theory is used in economics, social science and computer science to understand and predict the behaviour of people and intelligent entities.
a) Give two examples whereby a Nash equilibrium occurs in a contract management scenario
- b) Give two examples whereby two companies which are competitors in a market and engage in a classical game to promote their products may have a dominant strategy
- c) Consider the following normal game where Blue and Red can both play ‘C’ or ‘D’. The payoffs in each quarter are denoted in red and blue for the respective players. What are the Nash equilibrium states in this game? Is there a dominant strategy for either player? (3 marks)
- d) Oil-producing countries need to keep the price of oil from decreasing, and for this reason they need to limit the total production of oil. Therefore, each tries to maximize its profit by predicting the production levels of the other countries. Consider the following hypothetical situation, where Saudi Arabia and Venezuela get varying profit margins for the oil they produce. Venezuela can produce either 1M barrels per day or 2M barrels per day, while Saudi Arabia can produce 4M barrels per day or 5M barrels per day. If the total production of the two countries is 5M barrels, the profit margin is $16 per barrel. If the total production is 6M barrels, the profit margin is $12 per barrel. If the total production is 7M barrels, the profit margin is $8 per barrel.
- i) Illustrate this scenario in a pay-off matrix.
- ii) State whether there are any Nash equilibrium states, and which they are.
iii) Is there a dominant strategy for either country? (A computational sketch follows below.)
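A minimal sketch that tabulates the pay-offs (daily profits in $M, straightforwardly encoded from the stated margins) and finds pure-strategy equilibria by checking best responses:

```python
from itertools import product

margin = {5: 16, 6: 12, 7: 8}     # $ per barrel at each total output (M barrels/day)
V_opts, S_opts = (1, 2), (4, 5)   # Venezuela / Saudi Arabia production choices

payoff = {(v, s): (v * margin[v + s], s * margin[v + s]) for v, s in product(V_opts, S_opts)}

# A pure Nash equilibrium: neither country gains by deviating unilaterally.
nash = [
    (v, s) for v, s in payoff
    if payoff[v, s][0] >= max(payoff[w, s][0] for w in V_opts)
    and payoff[v, s][1] >= max(payoff[v, t][1] for t in S_opts)
]
print(payoff)
print("Nash equilibria:", nash)
```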
The Earned value method is commonly used to quantitatively measure how a project is tracking.
- a) In the case of a project which builds or produces a tangible object which has a market value (such as a house), explain why the ‘Earned value’ method cannot use present market value of the partially finished object to estimate the earned value of the project. Use suitable examples
- b) Explain why the BAC of the TCPI index must be re-assessed if the actual cost (AC) of the project has already exceeded the BAC. What will happen if the original BAC continues to be used? Use suitable examples in your explanation
- c) Give two examples of a scenario where a particular monotonous task in a project gets harder and harder as it is completed? Similarly, give two examples of a scenario where a particular monotonous task in a project gets easier and easier as it is completed?
- d) If a project at a particular time has less Earned Value than its Planned Value but more Earned value than the Actual Cost of the project, what does that mean? Conversely, if a project at a particular time has more Earned Value than its Planned Value but less Earned value than the Actual Cost of the project, what does that mean?
- e) Explain, with three suitable examples, how the EV method could be misleading in describing the health of the project at a given time. (The standard indices are recalled below.)
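For reference, the standard earned-value indices that parts (b), (d) and (e) revolve around (standard definitions, not specific to this unit):

$$\mathrm{CPI} = \frac{EV}{AC}, \qquad \mathrm{SPI} = \frac{EV}{PV}, \qquad \mathrm{TCPI} = \frac{BAC - EV}{BAC - AC}$$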
Faraliers construction company is developing five separate sets of town houses in Pendle Hill. Each set is considered a project, and the five sets together are considered a program which shares resources. The program employs a specialist painter, who is good at creating luxury-looking interiors with cheap paint, and a scheduler. Starting on 21st August 2018 (which is considered day zero, with all subsequent days numbered 1, 2, 3, 4, etc.), the scheduler receives requests for the services of the specialist painter on the following days.
Assume that the specialist painter does not work over the weekend, and that the project numbers indicate their priority to Faraliers Ltd (the lowest number has the highest priority). The arrival days are business days (weekend not included).
The scheduler will schedule the specialist painter's tasks using one of: a) FIFO order; b) Shortest Job First with pre-emption; c) fixed priority without pre-emption; or d) round robin (with a three-day quantum). (A FIFO sketch follows part c.)
- a) Based on the above, compute the average response time, in terms of business days, for each scheduling method.
- b) Compute the average completion time, and throughput, in terms of business days, for each scheduling method.
- c) Based on average response time, which is the best scheduling method? Based on throughput, which is the best scheduling method? Comment on your findings.
BanRay sunglasses have been experiencing a decline in sales, and to boost sales they introduce a discount system for retailers who sell faster. A retailer making the next order within five days of the preceding order will receive a 30% discount. If the interval between orders is between five and ten days, the discount is 20%. If the interval is between ten and twenty days, the discount is 15%. If the interval is between twenty and thirty days (a month), the discount is 10%. If the interval is between 30 and 60 days, the discount is 5%. No discount is offered if the second order comes more than 60 days after the preceding one.
Past experience shows that orders from a particular retailer arrive randomly, with an average frequency of two orders per month (you can assume a month consists of 30 days).
- Determine the probability that a retailer receives each of the discount levels.
- Determine the average discount per arriving order.
- If a consignment of sunglasses costs $10,000 on average, and BanRay has 126 retailers, how much money should BanRay set aside each year to offset the discounts offered? For this question, assume that a month always consists of thirty days.
- Now assume that by next year, the sales of BanRay have picked up by 15% as a result of the reward scheme, but BanRay still has the same annual budget for discounts. Suggest, with justification, a revised reward scheme for retailers which fully uses the budget but does not exceed it. Show all workings. (A sketch of the underlying probability computation follows.)
Redferni Pvt Ltd produces tables and chairs from wood and plastic. 12 kg of wood and 10 kg of plastic are needed to produce a table, whereas 5 kg of wood and 2 kg of plastic are needed to produce a chair. A table will fetch $300 in profit, whereas a chair will fetch only $100. In a given month, a maximum of 12,000 kg of wood and 8,000 kg of plastic will be available to the company. Since the company sells dining sets as well as individual chairs and tables, at least four chairs must be produced for every table produced.
- a) Use the simplex method to calculate how many tables and chairs must be produced each month to maximize profit. Show all steps and workings. (The LP formulation is sketched after part c.)
- b) Now suppose that in the following month, a market survey indicates that the maximum demand for a table-and-four-chairs set in a month is 600 sets. Assume that nobody will buy a table by itself, and the company sells no other configuration of sets, but the company sells chairs by themselves. Explain how this new information will change your calculations, and if necessary, recalculate your answer according to this new constraint.
- c) Explain why the simplex method cannot be readily applied to minimisation problems in the way it is applied to maximisation problems.
Data analytics can be applied in project management in different scenarios to make decisions.
- a) Describe three scenarios / examples where a Nash equilibrium may exist between two or more competing business entities. In each scenario, describe a catalyst event which will disturb this Nash equilibrium.
- b) You have learned about the roles of leaders and managers, and the differences between them. Based on your understanding, discuss whether data analytics is more useful for leaders or managers. Give appropriate examples.
- c) Describe two biologically inspired or nature-inspired algorithms and discuss how they can be used in data analytics
- d) Describe the difference between generic Hill-climbing and shotgun Hill-climbing algorithms
- e) Name and discuss two sectors of the industry where data analytics would be most applicable and have the most utility.
A clipper is a circuit that is used to eliminate a portion of an input signal. There are two basic types of clippers: series clippers and shunt clippers. As shown in Figure 4-1, the series clipper contains a diode that is in series with the load. The shunt clipper contains a diode that is in parallel with the load.
FIGURE 4-1 Basic clippers.
The series clipper is a familiar circuit. The half-wave rectifier is nothing more than a series clipper. When the diode in the series clipper is conducting, the load waveform resembles the input waveform. When the diode is not conducting, the output is approximately 0 V (Figure 4.2). The direction of the diode determines the polarity of the output waveform. If the diode symbol (in the schematic diagram) points toward the source, the circuit is a positive series clipper, meaning that it clips the positive alternation of the input. If the diode symbol points toward the load, the circuit is a negative series clipper, meaning that it clips the negative alternation of the input (Figure 4.11).
Ideally, a series clipper has an output of V_out = V_in when the diode is conducting (ignoring the voltage across the diode). When the diode is not conducting, the input voltage is dropped across the diode, and V_out ≈ 0 V.
Unlike a series clipper, a shunt clipper provides an output when the diode is not conducting. For example, refer to Figure 4-1. When the diode is off (not conducting), the component acts as an open. When this is the case, the series resistor (R_S) and the load (R_L) form a voltage divider, and the output from the circuit is found using V_out = V_in × R_L / (R_S + R_L).
When the diode in the circuit is on (conducting), it shorts out the load. In this case, the circuit ideally has an output of V_out = 0 V. Again, this relationship ignores the voltage across the diode. In practice, the output from the circuit is generally assumed to equal +0.7 V or −0.7 V, depending upon whether the circuit is a positive shunt clipper or a negative shunt clipper. The direction of the diode determines whether the circuit is a positive or negative shunt clipper. The series current-limiting resistor (R_S) is included to prevent the conducting diode from shorting out the source.
A biased clipper is a shunt clipper that uses a dc voltage source to bias the diode. A biased clipper is shown in Figure 4-2. (Several more are shown in Figures 4.9 and 4.10.) The biasing voltage (V_B) determines the voltage at which the diode begins conducting. The diode in the biased clipper turns on when the load voltage reaches a value of V_B + 0.7 V. In practice, the dc biasing voltage is usually set using a potentiometer and a dc supply voltage, as shown in Figure 4.10.
Clippers are used in a variety of systems, most commonly to perform one of two functions: altering the shape of a waveform, and protecting circuits from transients.
The first application is apparent in the operation of half-wave rectifiers. As you know, these circuits are series clippers that change an alternating voltage into a pulsating dc waveform. A transient is an abrupt current or voltage spike of extremely short duration. Left unprotected, many circuits can be damaged by transients. Clippers can be used to protect sensitive circuits from the effects of transients, as illustrated in Figure 4.12.
Clampers (DC Restorers)
A clamper is a circuit that is designed to shift a waveform above or below a dc reference voltage without altering the shape of the waveform. This results in a change in the dc average of the waveform. Both of these statements are illustrated in Figure 4-3. (The clamper has changed the dc average of the input waveform from 0 V to +5 V without altering its shape.)
There are two basic types of clampers: positive clampers, which shift a waveform upward, and negative clampers, which shift it downward.
Both types of clampers, along with their input and output waveforms, are shown in Figure 4.17. The direction of the diode determines whether the circuit is a positive or negative clamper.
Clamper operation is based on the concept of switching time constants. The capacitor charges through the diode and discharges through the load. As a result, the circuit has two time constants: a charging time constant (T1) set by the diode path, and a discharging time constant (T2) set by the load.
Since T2 is normally much greater than T1, the capacitor charges much more quickly than it discharges. As a result, the input waveform is shifted as illustrated in Figure 4.16.
A biased clamper allows a waveform to be shifted above (or below) a dc reference other than 0 V. Several examples of biased clampers are shown in Figure 4-4.
The circuit in Figure 4-4(a) uses a dc supply voltage (+V) and a potentiometer to set the potential at the cathode of the diode. By varying the potentiometer setting, the dc reference voltage for the circuit can be varied between approximately 0 V and the value of the dc supply voltage.
The zener clamper in Figure 4-4(b) uses a zener diode to set the dc reference voltage for the circuit. The dc reference voltage for this circuit is approximately equal to the zener voltage, V_Z. Note that zener clampers are limited to two varieties: positive and negative.
A voltage multiplier provides a dc output voltage that is a multiple of the circuit's peak input voltage. For example, a voltage doubler with a peak input of 10 V provides a dc output that is approximately 20 V. Two voltage doublers are shown in Figure 4-5.
Each of the circuits in Figure 4-5 provides a dc load voltage that is approximately twice the value of the peak source voltage. The half-wave doubler gets its name from the fact that its output capacitor is charged only during the positive half-cycle of the input signal, as shown in Figure 4.21. In contrast, the output capacitor in the full-wave doubler is charged during both alternations of the input cycle, as shown in Figure 4.23. Note that the output from a full-wave doubler has less ripple than the output from a comparable half-wave doubler.
The voltage tripler is very similar to the half-wave voltage doubler. If you compare the tripler shown in Figure 4-6 to the circuit in Figure 4-5(a), you will see that the circuit made up of the first two diodes and their capacitors is actually a half-wave voltage doubler. This circuit charges its output capacitor to a value of 2V_pk. During the negative alternation of the input cycle, the remaining capacitor is charged to approximately V_pk. The voltage across the series combination of these two capacitors is approximately 3V_pk. Since the load is in parallel with this series combination, the load voltage is also approximately equal to 3V_pk.
Voltage multipliers reduce source current by roughly the same factor that they increase source voltage. For example, a voltage tripler produces a dc output voltage that is approximately three times the peak source voltage. At the same time, its maximum output current is roughly one-third the value of the source current. As such, voltage multipliers are commonly used in high-voltage, low-current applications. They can also be used to produce dual-polarity output voltages in power supply applications (Figure 4.26).
LEDs are most commonly used as power indicators, level indicators, and as the active elements in multisegment displays.
The power indicator on any electronic component is most likely an LED. When the component is turned on, power is supplied to the LED. The LED lights, indicating that the component is on. A level indicator is used to indicate when a signal voltage reaches a designated level. Several examples of level indicators are shown in Figure 4.27.
LEDs are most commonly used in multisegment displays. These displays are used to display alphanumeric symbols, such as letters, numbers, and punctuation marks. A typical seven-segment display is shown in Figure 4.28. Several other common displays are shown in Figure 4.29.
Each type of display is available in either a common-anode or common-cathode configuration. A common-anode display has a single anode (+V) input that is applied to all the LEDs in the display. Individual segments are lighted by providing a ground path to the appropriate cathodes. In contrast, a common-cathode display has a single cathode (ground) pin that is connected to all LEDs in the display. Individual segments are lighted by providing a +V input to the appropriate anodes. Note that many multisegment displays require a current-limiting resistor in series with each LED in order to restrict device current.
Another type of multisegment display, called a liquid-crystal display (LCD), contains segments that reflect (or do not reflect) ambient light. LCDs typically require less power than LED displays and thus are better suited for use in low-power electronic systems, such as portable phones.
Diode Circuit Troubleshooting
A variety of fault symptoms for clippers, clampers, multipliers, and displays are listed in the fault symptom tables in this chapter.
Multisegment displays are often controlled by ICs called decoder-drivers. These ICs provide the active +V (or ground) inputs required for the individual segments. The most common multisegment display fault is the failure of one or more segments to light. When this occurs, check the input to the common pin. Assuming that the potential there is correct, check the inputs from the decoder-driver. If the inputs to the display are correct, the display must be replaced. If not, the decoder-driver (and current-limiting resistor) must be tested.
In Mielec, the residential buildings constructed until World War II have not retained their original characteristics, as both interiors and elevations have been renovated and modernized. However, the villa of the general director of the PZL Airframe Plant has been nearly restored to its original state. The pre-war blocks of flats can be seen at 2, 4, 5 and 6 Kochanowskiego street (for ordinary workers), at 1 and 3 Kochanowskiego street (for foremen), at 1, 2, 3 and 4 Asnyka street and 4 Fredry street (for ordinary workers), at 20 and 22 Niepodległości street (for office workers), at 1 Czarneckiego street and 6 Skłodowskiej-Curie street (for engineers) and at 8 Skłodowskiej-Curie street (for department managers).
The construction of the housing estate was related to the influx of the workforce to the PZL Airframe Plant No. 2 in Mielec-Cyranka. The new plant needed workers, as well as well-qualified technical and office staff. This resulted in the need to create a housing estate for the plant employees and their families. The design of the urban layout was the responsibility of the construction project management team, led by Major Piotr Czyżewski, an architect and civil engineer. The plan comprised an area of about 20 hectares. It provided for the construction of a housing estate made of 27 buildings of various types, as well as a director's villa, two deputy directors' villas, a social centre, a primary school, a hospital, a church, a sports stadium and a post office. The design was inspired by the natural shape of a sand dune. Within its curve lay an infrastructural composition resembling half of a Venetian window.
From 1st September 1937 to 1st September 1939 (when World War II began), construction of six CIR-related developments was under way. A military airfield was being built for a new bomber wing. To facilitate test flights, a concrete taxiway was built, as well as the main runway (of hexagonal concrete blocks). In 1939, the foundation for one military hangar was made, and the excavation for the other was completed. Nearby, a company employed by the Construction Department at the Ministry of Military Affairs had gathered construction materials and steel frames for hangar assembly. At the beginning of the Nazi German occupation, the airfield facilities were taken over by the Ministry of Aviation of the Third Reich.
As regards the aviation plant facilities, the main production hall was built (for the PZL aircraft), as well as the management building, a plant hangar, the warehouses and the necessary infrastructure of the plant. The plant in Mielec was a branch of the PZL WP-1 Warszawa-Okęcie Airframe Plant.
The PZL housing estate comprised 16 multi-residential buildings, a block of flats for specialists, as well as a one-family villa and a semi-detached villa for directors. The design was based on the application of basic square modules (83m x 80m), on which two residential buildings were placed. Each building had 3 floors, as well as a basement with a shelter. The attics contained utility rooms and laundries.
The area near the housing estate, across former Kędziora street, was earmarked for levelling for the future allotments, promenades and parks. Amidst the green belt, a hospital and a multi-residential building for the medical personnel were to be built (by September 1939, the hospital buildings had been erected, but not finished), as well as a church with a presbytery, and a post office. From 1937 to 1938, a company road connected the estate and the plant.
Between the housing estate and the railway line, an excavation was made for the foundations of the Leo plant, manufacturing military shoes. Near the excavation a depot appeared, where the construction materials needed for building the plant were stored. In January 1939, a road connected two towns: Mielec and Kolbuszowa.
After the state authorities decided to start manufacturing, under licence, civilian airplanes (for flying clubs) and military ones (for the Polish army – first with piston engines, i.e. the Po-2 or CSS-13 aircraft, then with jet engines, i.e. MiG-15 and MiG-15 bis, under the names of Lim-1 and Lim-2), another decision was made: to expand the housing estate to accommodate the new aviation plant workers.
In 1957, the decision was made to transfer the title to the housing estate to the municipal authorities and the Municipal Residential Building Authority was established.
Today, the blocks of flats dating from the Central Industrial Region period stand next door to those built after World War II. Their characteristic stairwells have survived, with wooden banisters supported by a steel structure, so typical of that period. The vertical glazing of the stairwells is unique, too, although the building elevations are devoid of any architectural details.
Another item on the Route – currently at 8 Marii Skłodowskiej-Curie street – is the former block of flats for department managers (today, it houses a county sanitary and epidemiological station). The building contained six flats of a usable area of 130 m2. Each of the flats had 4 rooms, a corridor, a kitchen, a bathroom and a servant room. The rooms located in the outermost section of the flats were arranged as enfilade rooms. The design of the block emphasized symmetry, with an accentuated central entrance part. Little oblong windows were placed there, five rows of which produced a mosaic effect. The lateral façades had two types of windows, but their rhythm and regularity were retained, which produced a visually attractive shape. The whole building was given a clinker base course.
Now we move on to the deputy directors' villa (at 16 Chopina street). This semi-detached house is marked by the connected balconies on its front elevation. Its unassuming visual expressiveness was compensated for by its impressively vast usable area – 220 m2 – and excellent layout. Each of the two residences had its own stairwell, and the two-floor arrangement was subdivided into day-time and night-time sections. Each flat consisted of five rooms, two bathrooms, a kitchen, a servant room, a vestibule and the halls on both floors. Furthermore, the basement contained a garage, a boiler room, a fuel store, and utility rooms. In 2015, the authorities of the Mielec county sold the villa. Currently, it is unused (before that, it had been the seat of the County Family Support Centre, which moved to Żeromskiego street).
Another site on the Route is the company villa of the general director of the PZL aviation plant in Mielec (at 18 Chopina street). It is considered to be the architectural masterpiece of the whole housing estate. Today, it is the seat of the MARR S.A. Regional Development Agency. It was designed as a detached house. Vast stairs, culminating in an arcade supported by three round pillars, led to the residence. The porch offered entry to two functionally separate sections of the house. The residential part comprised a spacious room and a dining room, which could be accessed from the main hall, together with an enfilade and a garden terrace. The utility section, located on the opposite side of the house, consisted of a kitchen with a pantry, a servant room and a small annexe for the sideboard. The dressing room, the bathroom with a separate toilet, as well as four bedrooms were located upstairs. The usable area was ample indeed – it amounted to 257 m2. The basement contained additional utility rooms and a garage.
The central part of the building structure was supported by a reinforced concrete chimney, which held the binding joists together. Such a solution made it possible to create open space without resorting to pillars or other load-bearing elements. The design and aesthetic quality of this residence stand out from the other residential buildings constructed at the time of the Central Industrial Region. It was designed as a complex of three cuboids merging with one another, whose compositional dominant was the highest part: a glazed corner, whose role was that of wind protection. A unique visual element was provided by the front terrace pergolas. The house has acquired its unique form due to the arcade cut out in its main contour. A worthwhile accent is the pair of bull's-eye windows on the front elevation. The villa was given flat roofs, with small drip moulds. It makes a dynamic impression, which is emphasized by the stairs around the house. The harmony of various elements and the well-thought-out, sophisticated design of the façade prove the architectural vision behind the house. It is one of the best achievements of one-family developments within the framework of the CIR project. Today, one of the walls features an exhibition of photographs illustrating the origins and the post-war fate of the villa.
The last item on our Route is the 'Jadernówka' Photography Department of the Regional Museum in Mielec, located near the bus and railway stations. To reach it, we go along Niepodległości avenue, cross Torowa street, turn into Jadernych street and get to No. 19. The seat of the museum is a brick house built by the photographer August Jaderny from 1904 to 1906. It was designed by Stanisław Bronisławski as a ground-floor-only building, with one utility room in the attic. The photography atelier of the Jaderny family was open in this house for nearly 80 years. Since 1987, it has been a museum.
Some of the photographs exhibited have captured Mielec in the years when the Central Industrial Region was built. Apart from a collection of photographs, the museum in the former Jaderny atelier also offers 'museum lessons'. Famous for its unique collection of old cameras, it boasts more than 40 thousand exhibits, such as photographs, negatives, nearly 300 historical cameras, stands, flash guns, light meters, darkroom and laboratory equipment, enlargers, glass plate boxes, film cameras, projectors, as well as photography-oriented magazines and publications. This theme collection is especially worth visiting.
After leaving the museum, we head for the bus station at Kazimierza Jagiellończyka street (near Niepodległości street) or the railway station (near Głowackiego street).
What is a Moving Average in Excel?
A moving average in Excel is defined as an average calculated over successive subsets of an entire dataset. It is also called a rolling mean or a moving mean. The length of the time window stays the same, while the average "moves" because new data keeps being added to the dataset.
For instance, let us calculate the simple moving average (SMA) of sales over three months. Write the AVERAGE function for the first set of three values, then copy it down. Entering =AVERAGE($B2:$B4) in cell C4 gives the average of the first three months; copying it down gives the moving average over every three months for a year.
Table of contents
- What is a Moving Average in Excel?
- Moving Average() Excel Formula
- What are the types of moving average in Excel?
- How to Use Moving Average Function in Excel?
- Important Things to Remember
- Frequently Asked Questions (FAQs)
- Download Template
- Recommended Articles
- The moving average in Excel is a series of averages calculated from data points of different subsets in a complete data set.
- It can be calculated simply using the AVERAGE() function in Excel. We can also use Excel's built-in Data Analysis tool to calculate the moving average.
- You can plot graphs to check the smoothening of the fluctuations in the data and easily recognize trends.
- The OFFSET function with AVERAGE helps you calculate the moving average for varying periods.
Moving Average() Excel Formula
The simplest way to find the moving average in Excel is through the AVERAGE function. The syntax of the function is =AVERAGE(number1, [number2], …), with the following arguments:
- number1: (mandatory) The number or range for which average is required
- number2: (Optional) Any other numbers or cell ranges for which an average is required. You can supply up to 255 number arguments in total.
The moving average for a variable time of N weeks, months, or years can be calculated using a combination of OFFSET with AVERAGE.
What are the types of moving average in Excel?
The main types of moving average in Excel are described below (a short computational sketch follows the list):
#1 – Simple moving average in Excel (SMA)
It is the average of a subset of data at given intervals. For instance, if you have a city's temperature for ten days, summing the values and dividing by 10 gives the 10-day simple moving average.
#2 – Weighted moving average in Excel (WMA)
Here, recent data points are given more weight than older ones.
#3 – Exponential moving average (EMA)
It emphasizes the most recent data points as compared to past ones and reacts more significantly to recent changes.
How to Use Moving Average Function in Excel?
The moving average is calculated based on the time interval. Usually, it is estimated simply in three ways:
- Calculating the moving average by manually entering the AVERAGE() function
- Entering AVERAGE through the Excel ribbon
- Accessing Moving Average through the Excel ribbon Data tab
Manually entering the AVERAGE() function
Let us look at how we can find the simple moving average for the sales data of an air-conditioning company for one fortnight. For this, you may use the simple AVERAGE formula.
Step 1: We have the data on the number of air-conditioners sold for 15 days, and we must find the 5-day average. To use the AVERAGE function as a moving window, specify the cell range with relative references, especially for the rows.
So, you may use either =AVERAGE(B3:B7) or =AVERAGE($B3:$B7) in cell C7.
Step 2: Now, press Enter. You get the average of the first five values, B3 to B7. As you copy the formula down the column, the row references shift, and each cell shows the 5-day average ending at that row.
We enter the formula in C7 because, if it were entered higher in the column, there would not be sufficient data for calculating the 5-day average.
Thus, you can enter the formula and calculate the 5-day average to understand sales trends.
Entering AVERAGE through the Excel Ribbon
To find the moving average, place the cursor where you want to see the result and enter the AVERAGE() function through the Excel ribbon.
- Go to the Formulas tab, and in the Function Library, click on More Functions.
- Here, click on Statistical and select the AVERAGE function.
Accessing through the Excel Ribbon Data Tab
You can also access the moving average in Excel through the Data tab in the Excel ribbon.
Step 1: Go to the Data tab and click on Data Analysis in the Analysis group.
Step 2: In the pop-up dialog window, click on Data Analysis, and you can scroll and select Moving Average from the list. Now, click OK and enter the details required to calculate the moving average.
These three methods are effective in calculating the moving average for different datasets.
Let us look at various examples of calculating the moving average.
Here’s a simple example of how to apply the Data Analysis option to obtain the moving average.
Below are the weekly sales details of a salesman selling a product. We must calculate the moving average with an interval of 2.
Step 1: Go to the Data tab and select the Data Analysis option in the Analysis group.
Step 2: You get a data analysis pop-up window. Select the option “Moving Average.”
Step 3: Now, you get a pop-up window.
- Enter the input range, which is the range of the data set. Here it is from B2 to B9.
- Enter the interval, 2, in this case.
- Enter the output range where you want the output, C2:C9. Press OK.
Step 4: Here, we have checked the Chart Output check box, so we get the output together with a chart of the moving average. Since the first cell, C2, does not have enough data points to find the moving average, it shows #N/A.
Let us calculate the moving average for a city’s average temperature over a period of 12 months. Here, we are doing it for 3- and 6-month intervals. Also, plot a graph of the values.
Step 1: We must apply the formula below to calculate the three-month average.
Step 2: Go to cell C4 and type =AVERAGE(. Select the cells from B2 to B4 since we need the moving average of 3 months.
- Hence, you get the formula =AVERAGE(B2:B4) in cell C4.
- Press Enter. Copy the formula to all cells from C4 to C13. You get the three-month moving average of the temperature.
- If we add the formula =AVERAGE(B1:B3) in cell C2, you do not get an error.
- It is because the AVERAGE() function tends to ignore text values and empty cells and finds the average of the numbers present in the range.
- Hence, you get a moving average of 12, as 12 is the only number in this range.
Step 3: To find the six-month moving average, add the formula =AVERAGE(B3:B8) in cell D8.
Step 4: Now, drag the Autofill handle up to D14. Here, you get the six-month moving average for the different data points.
Step 5: Now, let us plot the graph for these two moving averages to smooth out any fluctuations.
- Select the range B3 to B14, go to the Insert tab, and choose the required chart. We have chosen the 2D line chart here.
- Next, click on the + sign on the top right of the chart and select the Trendline option.
- Click on the arrow in the Trendline option and choose the Two period Moving Average option. You get two trendlines.
- To convert them to moving-average lines, go to the Format tab and select the trendline you wish to format. Click on the Format Selection option.
- Now, on the right side, in the Format Trendline option, you can choose the time interval, a custom name, and the Moving Average option.
- Repeat the same for the 6-month interval trendline as well.
Thus, we obtain a moving average in Excel chart for our data. It shows the smoothening of the fluctuations in the original data.
We have so far calculated the simple moving averages for a dataset through different methods. Now, let us calculate the moving average when the number of periods is variable.
Finding the moving average for the last n values of a column
Step 1: Now, let us consider the OFFSET function. It has five arguments.
- The first argument specifies the starting point of the range for which we want to calculate the average. So, it is B2 in this case.
- The following argument specifies how many rows to move down. In this case, we want the last four months' average, so the starting point must be four rows above the end of the data.
- The COUNTA function counts the entries in the column, and subtracting four from it gives the required row offset. So, the second argument will be COUNTA(B2:B100) - 4. We extend the range to row 100 so the data set can grow.
- Next, we wish to remain in the same column. So, argument 3 is 0. If you want to move one column to the right, you give 1. One column to the left means -1.
- The last two arguments specify the height and width. So, 1,1 means a single cell. 2,1 means two rows and one column. Here, we need the last four rows; hence we specify it as 4,1. Next, we must find the average of this selected range. Therefore, type the following formula in cell E4.
=AVERAGE(OFFSET(B2,COUNTA(B2:B100)-4,0,4,1))
Step 2: Press Enter. Thus, you get the moving average of the last four months.
Step 3: Now, you can dynamically change the values for the last few months of any count.
Important Things to Remember
- The moving average smooths out fluctuations in the data and, in stock-price analysis, helps identify areas of support and resistance.
- Here, we can find it using the simple statistical function AVERAGE in Excel.
- Microsoft Excel has a built-in Data Analysis ToolPak to calculate simple moving averages.
- It is used mainly in fields such as weather forecasting with temperatures to understand the trends and by financial analysts to get the average value of a security over time.
Frequently Asked Questions (FAQs)
Why does the moving average show #N/A in some cells?
When the range of cells provided does not contain enough data points in the form of numbers, you get #N/A when you use the Data Analysis tool in Excel for the moving average.

What is the moving average used for?
The moving average is used to find the average of subsets in a data set, which helps in understanding trends in weather forecasting and in the financial analysis of stock prices. In addition, it helps smooth out fluctuations in the curve caused by individual data points.

How do you calculate a seven-day moving average?
The seven-day moving average can be calculated using the AVERAGE function by specifying a range of cells that contains 7 days. For example, =AVERAGE(B2:B8).

How do you plot the moving average in Excel?
The moving average in Excel can be plotted by selecting the Chart Output checkbox in the Data Analysis pop-up window, or by selecting an appropriate chart from the Charts group under the Insert tab.
This article should help you understand the moving average in Excel, with its formula and examples. You can download the template here to use it instantly.
In mathematics, a free module is a module that has a basis – that is, a generating set consisting of linearly independent elements. Every vector space is a free module, but, if the ring of the coefficients is not a division ring (not a field in the commutative case), then there exist non-free modules.
Given any set S and ring R, there is a free R-module with basis S, which is called the free module on S or module of formal R-linear combinations of the elements of S.
A free abelian group is precisely a free module over the ring Z of integers.
For a ring R and an R-module M, a subset E of M is a basis for M if:

- E is a generating set for M; that is to say, every element of M is a finite sum of elements of E multiplied by coefficients in R; and
- E is linearly independent, that is, r_1 e_1 + r_2 e_2 + ... + r_n e_n = 0_M for distinct elements e_1, e_2, ..., e_n of E implies that r_1 = r_2 = ... = r_n = 0_R (where 0_M is the zero element of M and 0_R is the zero element of R).
An immediate consequence of the second half of the definition is that the coefficients in the first half are unique for each element of M.
If R has invariant basis number, then by definition any two bases have the same cardinality. The cardinality of any (and therefore every) basis is called the rank of the free module M. If this cardinality is finite, say n, the free module is said to be free of rank n, or simply free of finite rank.
Let R be a ring.
- R is a free module of rank one over itself (either as a left or right module); any unit element is a basis.
- More generally, a (say) left ideal I of R is free if and only if it is a principal ideal generated by a left nonzerodivisor, with a generator being a basis.
- If R is commutative, the polynomial ring R[X] in the indeterminate X is a free R-module with a possible basis 1, X, X^2, ....
- Let A[t] be a polynomial ring over a commutative ring A, f a monic polynomial of degree d there, B = A[t]/(f), and ξ the image of t in B. Then B contains A as a subring and is free as an A-module with basis 1, ξ, ..., ξ^(d-1).
- For any non-negative integer n, R^n, the cartesian product of n copies of R as a left R-module, is free. If R has invariant basis number (which is true for commutative R), then its rank is n.
- A direct sum of free modules is free, while an infinite cartesian product of free modules is generally not free (cf. the Baer–Specker group.)
Formal linear combinations
Given a set E and ring R, there is a free R-module that has E as a basis: namely, the direct sum of copies of R indexed by E,

R^(E) = ⊕_{e∈E} R.
Explicitly, it is the submodule of the cartesian product R^E (R is viewed as, say, a left module) that consists of the elements that have only finitely many nonzero components. One can embed E into R^(E) as a subset by identifying an element e with the element of R^(E) whose e-th component is 1 (the unity of R) and whose other components are all zero. Then each element x of R^(E) can be written uniquely as

x = Σ_{e∈E} r_e e,

where only finitely many of the coefficients r_e are nonzero. It is called a formal linear combination of elements of E.
A similar argument shows that every free left (resp. right) R-module is isomorphic to a direct sum of copies of R as left (resp. right) module.
The free module R(E) may also be constructed in the following equivalent way.
Given a ring R and a set E, first as a set we let

R^(E) = { f : E → R : f(e) = 0 for all but finitely many e }.
We equip it with a structure of a left module such that the addition is defined by: for x in E,

(f + g)(x) = f(x) + g(x),

and the scalar multiplication by: for r in R and x in E,

(r f)(x) = r · f(x).
Now, as an R-valued function on E, each f in R^(E) can be written uniquely as

f = Σ_{e∈E} r_e δ_e,

where the coefficients r_e = f(e) are in R, only finitely many of them are nonzero, and δ_e is given as

δ_e(x) = 1 if x = e, and 0 otherwise

(this is a variant of the Kronecker delta). The above means that the subset { δ_e : e ∈ E } of R^(E) is a basis of R^(E). The mapping e ↦ δ_e is a bijection between E and this basis. Through this bijection, R^(E) is a free module with the basis E.
The inclusion mapping ι : E → R^(E) defined above is universal in the following sense. Given an arbitrary function f : E → N from the set E to a left R-module N, there exists a unique module homomorphism f̄ : R^(E) → N such that f̄ ∘ ι = f; namely, f̄ is defined by the formula

f̄( Σ_{e∈E} r_e e ) = Σ_{e∈E} r_e f(e),

and f̄ is said to be obtained by extending f by linearity. The uniqueness means that each R-linear map R^(E) → N is uniquely determined by its restriction to E.
Many statements about free modules, which are wrong for general modules over rings, are still true for certain generalisations of free modules. Projective modules are direct summands of free modules, so one can choose an injection in a free module and use the basis of this one to prove something for the projective module. Even weaker generalisations are flat modules, which still have the property that tensoring with them preserves exact sequences, and torsion-free modules. If the ring has special properties, this hierarchy may collapse, e.g., for any perfect local Dedekind ring, every torsion-free module is flat, projective and free as well. A finitely generated torsion-free module of a commutative PID is free. A finitely generated Z-module is free if and only if it is flat.
ENVIRONMENTALLY AND SOCIALLY RESPONSIBLE OPERATIONS

By

NAZLI TURKEN

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2014
© 2014 Nazli Turken
To my family
ACKNOWLEDGMENTS

I first would like to thank my supervisory committee chair, Dr. Janice Carrillo, for her continuous patience, encouragement, and support. I would also like to extend my gratitude to my committee members Dr. Anand Paul, Dr. Tharanga Rajapakshe, and Dr. Joseph Geunes. Special thanks to my family and friends for their continuous love and support.
TABLE OF CONTENTS

page

ACKNOWLEDGMENTS .... 4
LIST OF TABLES .... 8
LIST OF FIGURES .... 9
ABSTRACT .... 10

CHAPTER

1 INTRODUCTION .... 12
1.1 Environmental Implications of Strategic Supply Chain Decisions: The Role of Location and Scale .... 12
1.2 Resource Allocation for Non-Profits: A Case of Animal Shelters .... 13
1.3 Resource Allocation of Animal Shelters with Capacity Expansion .... 14
1.4 Overview of the Dissertation .... 15

2 ENVIRONMENTAL IMPLICATIONS OF STRATEGIC SUPPLY CHAINS: THE ROLE OF LOCATION AND SCALE .... 16
2.1 Motivation .... 16
2.2 Literature Review .... 20
2.2.1 Facility Location and Capacity Acquisition Literature .... 21
2.2.2 Contemporary Environmental Models in Operations Research .... 21
2.2.3 Economics and Regulatory Limitations .... 22
2.2.4 Contribution to the Literature .... 23
2.3 The Model .... 24
2.3.1 EUFLP1: Emissions Tax Regulation .... 25
2.3.2 EUFLP2: Regional Production Emissions Regulation .... 27
2.3.3 EUFLP3: Transportation Emissions Regulation .... 28
2.4 Analysis .... 29
2.5 Solution Methodology for EUFLP3 .... 35
2.6 Realistic Data Set .... 39
2.7 Results of the Computational Experiments .... 41
2.7.1 The Base Case .... 41
2.7.2 Fixed Costs of Capacity Acquisition and Demand .... 42
2.7.4 The Effect of the Regional Environmental Production Constant .... 43
2.7.5 The Effect of the Regional Production Environmental Penalty .... 44
2.7.6 The Effect of the Transportation Emissions Constant, Transportation Emissions Limit, and Penalty .... 45
2.8 Concluding Remarks and Future Directions .... 46
3 RESOURCE ALLOCATION OF NONPROFITS: A CASE OF ANIMAL SHELTERS .... 55
3.1 Motivation .... 55
3.2 Literature Review .... 60
3.3 The Model .... 62
3.3.1 Adoption Guarantee Animal Shelter: M/G/k/k (No Bulk Arrivals, No Priorities) .... 64
3.3.2 Traditional Animal Shelter: M/G/k/k (No Bulk Arrivals, No Priorities) .... 64
3.4 Performance Comparison .... 65
3.5 Resource Allocation for Adoption Guarantee Shelters .... 70
3.5.1 Mean Demand Rate .... 73
3.5.2 Mean Waiting Time .... 78
3.5.3 Mean Rejection Rate and Mean Adoption Rate .... 80
3.5.4 Traffic Intensity .... 83
3.6 Resource Allocation for Traditional Shelters .... 85
3.7 Concluding Remarks .... 87

4 RESOURCE ALLOCATION OF ANIMAL SHELTERS WITH CAPACITY EXPANSION .... 98
4.1 Motivation .... 98
4.2 Realistic Data .... 99
4.3 Model Description .... 102
4.3.1 Model for Adoption Guarantee Shelters .... 102
4.3.1.1 Numerical Experiments for Adoption Guarantee Shelters .... 103
4.3.1.2 The Effect of Adoption Fees, Capacity, and Advertisements on the Reputation .... 104
4.3.1.3 The Effect of Reputation and Fundraising on Donations .... 105
4.3.1.4 Objective Function Weights .... 106
4.3.1.5 The Effect of Advertisements or Fundraising on the Mean Demand Rate (G, L) .... 107
4.3.1.6 Per Unit Cost of Operating a Primary Care Area (v) .... 108
4.3.1.7 Summary of the Results for Adoption Guarantee Shelter .... 108
4.3.2 Model for Traditional Shelters .... 110
4.3.2.1 The Mean Euthanization Rate Plus Mean Rejection Rate Problem .... 111
4.4 Concluding Remarks .... 112

5 CONCLUSIONS .... 120

APPENDIX

A EUFLP DATA .... 123
7 B RESOURCE ALLOCATION CALCULATIONS ................................ ..................... 125 Proof of Proposition 3: ................................ ................................ .......................... 125 Adoption Guarantee Shelter Solutions ................................ ................................ .. 126 Mean Demand Rate Problem ................................ ................................ ......... 126 Mean Waiting Time Problem ................................ ................................ .......... 128 Mean Rejection Rate Problem ................................ ................................ ........ 130 Traffic Intensity Problem ................................ ................................ ................. 131 Mean Adoption Rate Problem ................................ ................................ ........ 133 Traditional Shelter Solutions ................................ ................................ ................. 133 Mean Demand Rate Problem ................................ ................................ ......... 133 Mean Waiting Time Problem ................................ ................................ .......... 135 Mean Effective Euthanization Rate Plus Mean Rejection Rate ...................... 137 Traffic Intensity ................................ ................................ ............................... 139 Mean Adoption Rate ................................ ................................ ....................... 142 C CHARACTERISTICS OF THE DERIV ATIVES OF THE BLOCKING PROBABILITY ................................ ................................ ................................ ...... 144 D COMPLETE DATA AND RESULTS ................................ ................................ ...... 145 LIST OF REFERENCES ................................ ................................ ............................. 153 BIOGRAPH ICAL SKETCH ................................ ................................ .......................... 159
LIST OF TABLES

2-1 Summary of model parameters
2-2 Summary of decision variables
2-3 Model parameter estimates and sources
2-4 Numerical experiment ranges
2-5 Performance of the algorithm
3-1 Notation
3-2 Performance measures for adoption guarantee shelters
3-3 Performance measures for traditional shelters
3-4 Comparison of impact metrics
3-5 Summary of decision variables
3-6 Summary of parameters
3-7 Summary of adoption guarantee results
4-1 Summary of data
A-1 Summary of parameter estimates
D-1 Form 990 data
D-2 Mean demand rate regression data
LIST OF FIGURES

2-1 Non-smooth plant costs
2-2 The effect of gamma on network dispersion
2-3 The effect of the regional production emissions parameter on network dispersion
2-4 The effect of the transportation emissions limit on transportation emissions
3-1 Rehoming process at adoption guarantee and traditional shelters
3-2 The mean demand rate problem solution
3-3 The mean adoption rate problem solution (first dominance case)
3-4 The mean adoption rate problem solution (second dominance case)
3-5 The optimal versus approximated objective function comparison
4-1 The effect of advertisements on reputation vs optimal results
4-3 The effect of the weight of donations
4-4 Changes in G and the optimal adoption fees
4-5 The weight of donations vs optimal solutions
A-1 Plant size versus emissions
Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

ENVIRONMENTALLY AND SOCIALLY RESPONSIBLE OPERATIONS

By Nazli Turken
August 2014
Chair: Janice Carrillo
Major: Business Administration

This dissertation focuses on environmentally and socially responsible operations and is divided into two research substreams: (i) environmentally responsible operations; (ii) socially responsible operations. In Chapter 2, we study the effects of environmental regulations on the facility location and capacity acquisition decisions of a company. We extend the traditional facility location and capacity acquisition problems to include regional production and global transportation emissions regulations, and a carbon tax. We incorporate civil and criminal penalties as well as clean-up costs and injunctive relief as a consequence for violating the regulations. We utilize a realistic data set from the auto industry gleaned from publicly available resources to estimate the parameters of our model. We perform regression analysis to describe the relationship between production size and emissions. The model we present in Chapter 2 is a nonlinear, nonsmooth maximization problem that is known to be NP-complete. We propose an algorithm to solve this problem by taking advantage of the known discontinuity points. We then explain the strategic supply chain decisions of companies under environmental regulations. In Chapter 3, we utilize metrics from queuing theory to explain five possible performance measures we have identified for animal shelters. We use M/G/k/k queues with reneging and without reneging to represent traditional and adoption guarantee shelters. We then compare the adoption guarantee and traditional animal shelters on these performance measures. We then provide optimal resource allocation policies that maximize these performance measures while accounting for the costs. In Chapter 4, we include capacity expansion as a decision variable in our model and perform numerical experiments using realistic data gleaned from publicly available sources.
CHAPTER 1
INTRODUCTION

1.1 Environmental Implications of Strategic Supply Chain Decisions: The Role of Location and Scale
In different regions of the world, several environmental regulations are enforced to push companies into compliance, although the cost of monitoring does not allow every company to be monitored. Consequently, it is essential for environmental regulations to be designed in a way that persuades companies to comply willingly. The first step in designing effective environmental regulations requires understanding the behaviors of companies under different environmental regulations. Most of the current literature on emissions reduction has focused on carbon emissions. In Chapter 2, we study the effects of carbon taxes and of hazardous substance regulations such as the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) and the Superfund Amendments and Reauthorization Act on the facility location and capacity acquisition decisions of a company. We extend the traditional facility location and capacity acquisition problems to include regional production and global transportation emissions regulations as well as a carbon tax. We incorporate civil and federal penalties as well as clean-up costs as a consequence for violating the regulations. We utilize a realistic data set from the auto industry gleaned from publicly available resources to estimate the parameters of our model. We perform regression analysis to describe the relationship between production size and emissions. The model we present in Chapter 2 is a nonlinear and nonsmooth maximization problem that is known to be NP-complete. We first show results for special cases and then propose an algorithm to efficiently solve this problem by taking advantage of the
known discontinuity points. The Pseudo Facility Linear Estimation Algorithm can be used to solve any facility location problem where more than one fixed cost occurs, as long as the user is aware of the discontinuity points. The results of the realistic computational experiments illustrate that, with regards to the regional production environmental penalty, an increase in the lump sum dollar amount associated with the penalty is much more effective than a decrease in the actual limit of damage tolerated. These results have implications for the policy maker as well. Constantly reducing the environmental limits without increasing the penalties does not force the companies to comply with the regulations. On the contrary, choosing intermediate levels of environmental limits allows the companies to take advantage of economies of scale and avoid the risk of incurring penalties. The companies can avoid the penalties by dispersing their network and creating small to medium sized plants with small regional but large global environmental impact. As an extension to this paper, we try to identify incentives driving small and medium plants to willful improvement. We study the effects of different environmental technologies on the facility location and capacity acquisition decisions. The preliminary results show that a company will invest in a new technology in all locations or none.

1.2 Resource Allocation for Non-Profits: A Case of Animal Shelters
Weisbrod (1975) defines non-profits as extra-governmental providers of collective consumption goods. He explains the existence of non-profit organizations as the response to the excess demand from governmental organizations that can only serve to the level of the median voter. Non-profits differ from for-profit organizations in numerous ways: the distribution of wealth, measurement for services provided, and proof of completion of services. For-profit organizations distribute wealth among shareholders, whereas non-profits can use wealth only to invest
in resources for the organization. In addition, non-profits receive limited funding, making resource allocation a key factor in determining the future of the organization. The services a donor expects from a non-profit organization include distribution of goods or collective consumption of goods (i.e., education, food distribution or humane housing of animals), although there is not a tangible way to provide proof of completed services for some of these measures. These uncertainties and intangibilities make it challenging to analyze the operations of non-profits. In Chapter 3, we utilize metrics from queuing theory to explain five possible performance measures we have identified for animal shelters. We use M/G/k/k queues with reneging and without reneging to represent traditional and adoption guarantee shelters. We then compare the adoption guarantee and traditional animal shelters on these performance measures using real data. Our results show that the traditional shelters perform better in mean rejection rate, mean waiting time, and traffic intensity, and perform very poorly in effective mean euthanization rates. We perform numerical analysis to observe how sensitive these conclusions are to changes in the parameters. We find that the mean adoption rate is the parameter with the most impact on the performance measures. Specifically, when the mean adoption rate is above a threshold, the performance of the adoption guarantee shelter approaches the performance of the traditional shelter. As a final part of our analysis, we formulate a mathematical model to find the optimal budget allocation policy for adoption guarantee shelters.
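For intuition on the M/G/k/k machinery used in Chapters 3 and 4, note that in an M/G/k/k loss system the probability that an arrival finds all k primary care areas occupied follows the Erlang-B formula, which depends on the service-time distribution only through its mean. The short sketch below is illustrative rather than code from the dissertation, and the parameter values in the example are hypothetical:

def erlang_b(k, offered_load):
    # Erlang-B blocking probability for an M/G/k/k loss system, computed with
    # the numerically stable recursion B(j) = a*B(j-1) / (j + a*B(j-1)), B(0) = 1.
    b = 1.0
    for j in range(1, k + 1):
        b = offered_load * b / (j + offered_load * b)
    return b

# Hypothetical example: 50 primary care areas, 2 arrivals per day, and a mean
# length of stay of 20 days give an offered load of 2 * 20 = 40 erlangs.
blocking = erlang_b(50, 2 * 20)
print(blocking)  # fraction of arriving animals that find the shelter full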
1.3 Resource Allocation of Animal Shelters with Capacity Expansion
In addition to allocating resources to improve performance, non-profit organizations also aim to extend their service. To achieve this, they must allocate a portion of their resources to capacity expansion. In the case of animal shelters, the organization can improve its capacity by purchasing more primary care areas and caregivers. However, it is important to identify the right time and amount of capacity to be added to ensure the continuation of services. In Chapter 4, we incorporate capacity as a decision variable in the model we presented in Chapter 3. The model we introduce is mixed-integer, nonlinear and known to be NP-complete. We perform numerical experiments using realistic data to give recommendations on capacity expansion. We collect data from the Asilomar Accords, Form 990s, Charity Navigator and shelter websites to estimate the parameters. For the parameter values that are not readily available, we perform regression analyses to estimate their values. Our results show a trade-off between advertisements and fundraising activities. When the marginal benefit of advertisements is larger than the marginal benefit of fundraising, the animal shelter should invest all resources in advertisements, and vice versa. We also identify scenarios where the organization should consider investing in capacity.

1.4 Overview of the Dissertation
Chapter 2, on the environmental implications of strategic supply chains, constitutes the environmentally responsible substream of the dissertation. Chapters 3 and 4, on the resource allocation of non-profits, form the socially responsible substream: Chapter 3 focuses on the socially responsible/continuous help component of my research interests, and Chapter 4 incorporates capacity decisions into the non-profit resource allocation problem to study the growth of organizations.
CHAPTER 2
ENVIRONMENTAL IMPLICATIONS OF STRATEGIC SUPPLY CHAINS: THE ROLE OF LOCATION AND SCALE

2.1 Motivation
The manufacturing strategy area traditionally considers choices concerning a firm's structural and infrastructural decisions. In their classic article, Wheelwright and Hayes (1985) list capacity decisions (such as the amount, timing and type) as well as facilities decisions (such as size, location and specialization) as two key structural manufacturing decisions which will ultimately drive a firm's competitive performance. In this chapter, we study plant expansion in relation to the scale of the manufacturing plants under consideration. The benefits of economies of scale in creating larger plants are well documented in terms of increased productivity and lower costs. However, reliance on fewer large plants increases the costs of transportation and, consequently, emissions. To illustrate, General Motors (GM) recently announced expansion and investment in 17 of its manufacturing plants throughout the United States in response to increased demand (Terlep, 2011). Through a joint partnership in China, GM has also invested in plant expansion and is planning on building more plants there. While expansion decisions in the automobile industry are typically based on criteria such as increased demand, labor costs, and exchange rates, other environmental concerns are becoming increasingly important in these expansion decisions. According to a GM report on sustainability, GM strives to sell and buy where it builds. This practice makes commercial sense not only for its
business; it also helps minimize handling damage, preserve natural resources, minimize shipping and use less fossil fuel (GM Sustainability Report, 2012). Another key issue concerns the impact of both formal and informal environmental regulations on the plant expansion decision. The regulations on greenhouse gases can be divided into two types: carbon tax and cap and trade. In this paper we focus on the carbon tax policy, which involves a penalty for every ton of carbon emitted by the company. These regulations are mostly implemented in the 34 countries of the Organization for Economic Cooperation and Development (OECD), including many European countries, as well as in some U.S. states. The cap and trade (also known as emissions trading) program was introduced in the U.S. by the Acid Rain Program associated with the 1990 Clean Air Act. The cap and trade program is essentially a market-based tool to reduce emissions. The firms in this program receive emissions allowances, with a limit on the total amount that may be emitted. The firms then sell or purchase allowances to meet the overall limit. Several other regions use this policy for emissions reduction, especially the European Union. In addition to greenhouse gas regulations, a hazardous substance regulation (CERCLA) is implemented in the U.S. The Environmental Protection Agency (EPA) releases a list which includes reportable quantities for several hazardous substances. If a company exceeds this reported quantity, then the company has to report its emissions annually, and if found to be noncompliant, the company must recompense for
the violation. There are four kinds of penalties in the U.S.: civil, criminal, cleanup and federal facilities penalties. The civil and criminal penalties create an upper bound on the amount of penalty charged to a company per day per item. The cleanup penalty depends on the environmental damage induced by the emissions. In addition, injunctive relief, a part of the civil penalties, requires any noncompliant company to bring its facilities into compliance. In the years from 2007 to 2011, over 170 cases of criminal enforcement activities for the environmental regulations were reported. Among these cases, several companies were subjected to fines and restitution up to $370M (EPA NETs, 2012). Once a company violates an environmental regulation, there are several factors driving the magnitude of the penalties that it must pay, including the following: degree of willfulness or negligence, history of noncompliance, ability to pay, degree of cooperation, and other factors that are specific to the case. These fines could have been avoided if the company had considered environmental regulations in advance within its manufacturing network. To illustrate, the company can consider investing in environmental damage abatement technologies or redesigning its network to bring its facilities into compliance. More recently, GM has issued statements confirming an updated policy towards environmental regulations, committing to work with all governmental entities for the development of technically sound and financially responsible regulations (GM Sustainability Report, 2012). In addition to the production emissions, some companies are reconsidering the environmental effects of their facility network design due to transportation considerations. Ocean Spray recently redesigned their distribution network and opened
a new plant in Florida, reducing their emissions from transportation by 20% (Cheeseman, 2013). In 2008, Unilever created its own internal transport management organization called Ultralogistik with a goal of carbon emissions reductions through network redesign. Specifically, Unilever constructed regional distribution hubs to reduce the total distance travelled in Europe by 175 million kilometers (Unilever.com, 2013). In this paper, we analyze the supply chain design decisions for a multi-plant manufacturing network taking into account the environmental impact as well as the transportation and production costs. We study the location and capacity decisions under the lens of contemporary environmental considerations. We explicitly model the trade-off between economies of scale with regards to manufacturing in large plants and the environmental impact due to emissions and regulations. In addition, we identify conditions under which a dispersed manufacturing network is appropriate as opposed to a manufacturing network with larger centralized plants. In particular, we address the following research questions:

What are the trade-offs with regards to plant size between economies of scale and environmental implications?
How do national and regional environmental regulations impact plant size and location decisions?
How should a firm configure its manufacturing network in response to changes in transportation costs, plant size and environmental concerns?
Under what circumstances is a dispersed manufacturing plant strategy appropriate? Under what circumstances is a single large centralized plant appropriate?
Can solutions for the environmental facility location problem be identified in an economical manner?

This paper is organized as follows: Section 2.2 highlights the contemporary literature on capacity expansion and facility location problems along with the environmental considerations. In Section 2.3, we develop a detailed model which guides a firm in its capacity expansion and scale choices, given carbon emissions taxes, regional regulations on production emissions, and transportation emissions regulations. We also incorporate transportation decisions into the model to explicitly capture the trade-off between plant size and the dispersion of the supply network. In Section 2.4, we provide analytical results for some special cases. In Section 2.5, we analyze the model, which is nonlinear, integer and discontinuous with concave production costs, and introduce an algorithm that solves the problem by taking advantage of known discontinuity points. In Section 2.6, we describe the realistic data set that constitutes the basis of the computational experiments that we report on in Section 2.7. In Section 2.8, we offer managerial insights into the results obtained from the computational experiments, and provide future directions for research.

2.2 Literature Review
During the course of conducting the study presented in this paper, we utilize three streams of research, which we briefly discuss. In particular, the literature on (i) facility location and capacity acquisition decisions, (ii) the incorporation of environmental issues in mathematical models, and (iii) environmental regulations are all quite relevant to this work. In closing, we position the paper in the context of these three streams of literature and reiterate our contribution to this literature.
2.2.1 Facility Location and Capacity Acquisition Literature
Strategic supply chain decisions including capacity expansion and facility location problems have been studied since the 1960s. Manne (1961) was amongst the first to study capacity expansion with probabilistic growth. One of the most basic representations of the facility location problem is the uncapacitated facility location problem (UFLP). Efroymson and Ray (1966) analyze the single period plant location problem where the plant costs are piecewise linear concave. They introduce a branch and bound method to solve their version of the UFLP. Verter and Dincer (1992) note that single period UFL and capacity acquisition (CAP) decisions had typically been handled separately in the literature, and they provide a model integrating the UFL and CAP decisions. Verter and Dincer (1995) base their solution on the Dualoc algorithm by Erlenkotter (1978) along with a progressive linear approximation of the continuous concave plant costs. In addition, Verter and Dincer (1995) identify the conditional dominance property, whereby each market will be fully served by a dominant facility (full server). For a summary of the related literature on facility location problems, see Melo et al. (2009).

2.2.2 Contemporary Environmental Models in Operations Research
More recent operations literature calls for research which incorporates environmental considerations into manufacturing strategy (Angell and Klassen, 1999; Dangayach and Deshmukh, 2001; Corbett and Klassen, 2006). Angell and Klassen (1999) identify two different perspectives on how the environment influences operations management: via component resources or operating constraints. These authors also highlight opportunities for research at this interface. Dangayach and Deshmukh
(2001) offer a thorough review of the manufacturing strategy literature, categorizing each paper by content, methodology and outlet. They also identify papers concerning both environmental issues and manufacturing strategy and note that more research is needed which connects these two important areas. Corbett and Klassen (2006) also link environmental and operations strategy via a resource-based view of the firm. Corbett et al. (1995) use mathematical programming to optimally allocate resources for the decontamination of polluted sites by utilizing a quantitative environmental measure previously developed by Jacobse and Wolbert (1988). Kraft et al. (2013) analyze the impact of regulatory uncertainty concerning certain potentially hazardous substances. An earlier study (1992) develops a model which is equivalent to a nonlinear program in which environmental damage is represented with carbon emissions. Klassen and Vachon (2003) utilize survey data from several Canadian manufacturing plants and find that supply chain collaboration significantly affects the level and form of investment in environmental technologies. Diabat and Simchi-Levi (2009) utilize mixed integer programming (MIP) to study the supply chain network problem with a carbon emissions tax.

2.2.3 Economics and Regulatory Limitations
While contemporary considerations of carbon emissions seem to dominate the popular press, regulations concerning hazardous substances and natural resources have existed for decades (Pashigian, 1984). A body of literature within the economics field examines the effects of such regulations on
manufacturing plants. To illustrate, Snir (2001) focuses on product stewardship, which can be used to ensure that the company is in compliance with the regulations and to identify the liabilities the company will face in case of noncompliance. A body of prior literature points to the limitations of such regulatory environments on total manufacturing capacity at the aggregate level (Gray and Shadbegian, 1993), while some of the literature finds little or no significant negative impact on these capacity investments (Shadbegian and Gray, 2005). Furthermore, such regulations can occur at the federal, state or community level. More recently, Chen and Monahan (2010) highlight the role of informal or voluntary regulations within certain communities and firms in moderating the environmental impact associated with classic operations decisions.

2.2.4 Contribution to the Literature
Pollution control expenditures in the U.S. have totaled over $125 billion per year, a level that represented more than 2% of gross national product, and prior work notes that the effects of pollution control are both political and economic in nature. Thus, it is vital for a firm to incorporate environmental issues into its strategic decision making concerning its supply chain. We extend the traditional capacity acquisition literature by incorporating the impact of both production emissions limitations and transportation emission constraints imposed by regional governments. The aforementioned model is a discontinuous, nonlinear integer minimization problem which is known to be NP-complete. The Pseudo Facility Linear Estimation Algorithm can be applied to any
facility location problem with numerous fixed costs, as long as the user is aware of the discontinuity points. We propose an algorithm that takes advantage of the known discontinuity points. In addition, we solve limited versions of the model to gain deeper theoretical insights concerning the optimal solutions to the problem. Finally, we utilize information from several auto companies within the U.S. to illustrate numerical examples of the model, distinguishing the circumstances under which a dispersed or centralized plant network is optimal. Our results show the impact of various regulatory schemes (such as target emissions levels and penalties) on the optimal plant network. By modeling the effects of regulations on the facility location decisions, we can identify the environmental limits and penalties that will drive the company to compliance. We specifically find that creating stricter regulations without high penalties will not assure compliance. Furthermore, we find scenarios where high polluting companies will be noncompliant even if the aforementioned levels of limits and penalties exist. In addition, we identify the magnitude of environmental damage a company causes by choosing a dispersed or centralized facility location layout.

2.3 The Model
In a traditional facility location and capacity acquisition problem there are fixed and variable costs of building at each location. The fixed location cost allows us to capture the differences in construction costs between countries and regions. The variable location cost represents the equipment and other costs related to building a new facility. We model the demand as deterministic, assuming that the short-term variability in demand is insignificant. The demand at each location differs and can be satisfied through production from any of the regions, and a cost is incurred for transporting products between two regions. In addition to the fixed cost of establishing a
new facility in any region, we incorporate the capacity acquisition and production cost with a general function including concave elements. Specifically, economies of scale in production are captured through a concave cost term. The notations used are shown in Table 2-1 and Table 2-2. In this paper, we consider an emissions (carbon) tax, regional production regulations and transportation regulations. We introduce our model in three steps: the first model (EUFLP1) is a UFLP model with an emissions tax considered. The second model (EUFLP2) is a nonsmooth, nonlinear integer problem with concave production costs incorporating the emissions tax and regional production emissions limits and penalties. In EUFLP3, we finally incorporate these two elements, plus the transportation emissions and penalties, into the model. In Section 2.4, we show analytical results for some special cases of EUFLP1 and EUFLP2. Our complete model (EUFLP3) is known to be NP-complete; thus, in Section 2.5 we propose an algorithm to solve it.

2.3.1 EUFLP1: Emissions Tax Regulation
In our model, instead of limiting the tax policy to only carbon emissions, we use the term emissions tax to include all hazardous substances from production. We incorporate the emissions tax into our model as a part of the variable cost, assuming a linear relationship between production size and emissions. To motivate our modeling choice with regards to the relationship between production size and emissions, we utilize linear and exponential regression analysis to support our linearity assumption. We gathered the production/sales rates from various resources and total hazardous chemical releases from the EPA Toxic Release Inventory for 5 different auto manufacturing companies between the years of 2005-2010, and found an R-squared value of 0.7633 for the linear fit versus 0.5722 for the exponential fit.
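The fit comparison can be replicated along the following lines; the arrays here are placeholders rather than the actual company data (which appear in Table A-1), and the exponential fit is performed on log-transformed releases:

import numpy as np

# Placeholder production volumes and total hazardous releases; the
# dissertation's actual values come from EPA TRI reports and 10-K/20-F filings.
production = np.array([1.2e6, 1.8e6, 2.5e6, 3.4e6, 4.1e6])
releases = np.array([0.9e6, 1.5e6, 1.9e6, 2.8e6, 3.2e6])

def r_squared(y, y_hat):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    ss_res = ((y - y_hat) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

# Linear model: releases = b0 + b1 * production
b1, b0 = np.polyfit(production, releases, 1)
r2_linear = r_squared(releases, b0 + b1 * production)

# Exponential model: releases = exp(c0) * exp(c1 * production)
c1, c0 = np.polyfit(production, np.log(releases), 1)
r2_exponential = r_squared(releases, np.exp(c0 + c1 * production))

print(r2_linear, r2_exponential)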
There are several different hazardous substances that are released to the environment, but to evaluate the overall effect we summed up the releases for all substances for each company from all plant locations in the U.S. The R-squared values show that a linear function is a better fit for the relationship between production and emissions. (Refer to Table A-1 and Figure A-1 in the Appendix for the data and regression graphs.)

The objective function of EUFLP1 is the sum of four terms, minimized subject to the following constraints:
1. The binary variable indicating an expansion must be equal to 1 if there is any production at location i.
2. The demand at each region is satisfied.
3. The constraints defining the ranges of the variables.
The first and second terms of the objective are the fixed and variable costs of capacity acquisition and the emissions tax. The third term is the cost of production with economies of scale. Finally, the fourth term is the total cost of transporting between locations. Constraint sets (1) and (2) are similar to those typically associated with the UFLP. Specifically, the demand at each location can be fulfilled from local production or with the excess supply transferred from other regions. Constraint set (3) delineates the feasible range for the descriptive and binary variables. EUFLP1 is a classical UFLP with the emissions tax included in the variable capacity acquisition cost.
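For concreteness, EUFLP1 can be sketched as follows. The notation here is illustrative (the dissertation's own symbols are defined in Tables 2-1 and 2-2): f_i and v_i are the fixed and variable capacity acquisition costs at location i, t the emissions tax per unit, a_i(.)^alpha the concave production cost with 0 < alpha < 1, c_ij the unit transportation cost, D_j the demand at region j, x_ij the quantity produced at i for j, y_i the expansion indicator, and M a large constant:

\min Z_1 = \sum_i f_i y_i + \sum_i (v_i + t) \sum_j x_{ij} + \sum_i a_i \Big( \sum_j x_{ij} \Big)^{\alpha} + \sum_i \sum_j c_{ij} x_{ij}

\text{s.t.} \quad \sum_j x_{ij} \le M y_i \;\; \forall i \; (1), \qquad \sum_i x_{ij} = D_j \;\; \forall j \; (2), \qquad x_{ij} \ge 0, \; y_i \in \{0,1\} \; (3).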
In the next step, in EUFLP2, we introduce the regional production environmental regulations and penalties.

2.3.2 EUFLP2: Regional Production Emissions Regulation
We introduce environmental constants for production emissions to reflect the percentage of waste from production activities. Similar to EUFLP1, we assume a linear relationship between production and emissions. These environmental constants will change depending on the industry and the environmental technology adopted by the company. In addition, environmental limits/regulations are typically established at a local level and can vary widely between different regions of the world. These different types of environmental requirements are captured in our model by an environmental constraint. If the production emissions in a region exceed the regional limit, then the binary indicator for noncompliance at region i equals 1 and the company pays environmental penalties. We allow for both a fixed penalty associated with production emissions as well as a variable penalty based on the magnitude of the violation. The objective function for EUFLP2 with regional production emissions augments EUFLP1 with these penalty terms and is minimized subject to constraints (1), (2), (3) and (4). In constraint (4), we introduce the combined term of emissions in excess of the regional limit as the pollution residuals:
4. The binary variable indicating a violation of a regional environmental regulation must be equal to 1 when the emissions from production in region i exceed the limit.
The penalties issued also vary between different regions of the world. In our model, we use the penalty example utilized by the EPA, which includes several factors such as civil and criminal monetary penalties as well as a Superfund penalty. The fourth and fifth terms in the objective function are the costs of violating the environmental limits and are paid by the company as a lump sum, plus a variable cost per unit, for each region in which the environmental regulations are violated. The fixed penalty can be interpreted as the civil or criminal penalty enforced. The variable penalty reflects the clean-up costs associated with emissions as well as the necessary investments to bring the operations to a level that complies with environmental regulations.
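In the same illustrative notation, with e_i the environmental production constant, E_i the regional emissions limit, r_i the pollution residual, z_i the noncompliance indicator, and Phi and phi the fixed and variable penalties, the EUFLP2 additions can be sketched as:

\min Z_2 = Z_1 + \sum_i \big( \Phi z_i + \phi r_i \big)

\text{s.t. (1)-(3) and} \quad r_i \ge e_i \sum_j x_{ij} - E_i, \qquad 0 \le r_i \le M z_i, \qquad z_i \in \{0,1\} \;\; \forall i \; (4).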
Production regulations are not the only concern, as 28% of the greenhouse gas emissions in the U.S. result from transportation (EPA.gov). As a third step, in EUFLP3, we introduce the transportation emissions regulations and penalty.

2.3.3 EUFLP3: Transportation Emissions Regulation
We capture the transportation emissions through parameters that change depending on the distances between the regions and the available technology in the producing region. We assume that there is a one-time penalty for violating a transportation emissions regulation. The objective function for EUFLP3 adds this penalty to EUFLP2 and is minimized subject to constraints (1), (2), (3), (4) and (5):
5. The binary variable indicating a violation of a transportation emissions regulation must be equal to 1 when the total emissions from trucks exceed the specified limit.
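Completing the sketch with g_ij denoting the transportation emissions per unit shipped from i to j (a function of distance and technology), T the global transportation emissions limit, Psi the one-time penalty, and w the violation indicator (again illustrative notation):

\min Z_3 = Z_2 + \Psi w, \qquad \text{s.t. (1)-(4) and} \quad \sum_i \sum_j g_{ij} x_{ij} - T \le M w, \qquad w \in \{0,1\} \; (5).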
2.4 Analysis
In Section 2.4, we present our analytical findings for the special cases of EUFLP1 and EUFLP2 to develop key concepts associated with the problem. We first highlight the similarities between the UFLP and EUFLP1, and utilize an important property of the UFLP to gain further insights. Then we show the scenarios where a single noncompliant facility dominates solutions with multiple compliant facilities for EUFLP2. We also analyze the impact of the environmental constraints on the optimal solutions for the special case where all other cost parameters are similar. Finally, we highlight the situation where an increase in the environmental constant reduces the plant size and increases the number of plants. EUFLP1, the environmental facility location problem with an emissions tax, is a classical UFLP model, and the conditional dominance property defined in Verter and Dincer (1995) still holds. According to this property, each market will be fully served by a dominant facility (full server) that varies with demand, given that all other parameters remain the same. From this result, the demand at market j will be fully satisfied by the plant location that can provide the lowest cost.
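In symbols, conditional dominance says that under EUFLP1 each market j is assigned in full to the location with the lowest total delivered cost, which can be written in the illustrative notation above as

x_{i^*(j),\, j} = D_j \quad \text{where} \quad i^*(j) = \arg\min_i \big\{ \text{fixed, production, tax and transportation cost of serving } j \text{ from } i \big\}.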
The effect of an emissions or carbon tax on a market's assignment is similar to the effect of any other linear variable cost on the decision. A region with a lower emissions tax is not necessarily dominant over a region with a higher emissions tax, as the optimal decision depends on the total cost, including fixed and variable costs of capacity, production and transportation costs, and the emissions tax. In the case of symmetric plant locations (i.e., all of the costs and distances to market j are the same for each supply location) with location A having a lower emissions tax, market j will be fully served from location A. However, the symmetric locations scenario does not exist in reality, and the reason companies move their production to lower emissions (carbon) tax regions is not solely the low emissions taxes but also the benefits of lower corporate taxes and wages and the relaxed regulations emerging in new markets. The lower overall cost is the factor which drives the companies to less regulated regions, which is also supported by the conditional dominance property. In addition to the emissions tax, many regions in the world employ command and control type regulations. EUFLP2 incorporates the regional production emissions regulations, which create a soft capacitated facility location problem, and the conditional dominance property no longer holds. In a soft capacitated facility location problem, an extra cost is incurred for opening a facility larger than the allowed capacity limit. Similar to the capacitated facility location problem, the optimal solution follows that of an extreme flow pattern. In the capacitated facility location problem, this extreme flow solution contains at most one flow amount between the upper and lower bounds. Thus,
the demand at each market will be served by either: (i) one facility that fully satisfies the demand (full server), (ii) Q facilities at their capacity limit sizes that partially satisfy the demand (partial servers), or (iii) Q-1 partial servers and one remainder server, where the remainder server is not at its capacity. In the remainder of Section 2.4 we abbreviate the compliant and noncompliant facilities as CF and NCF and the remainder compliant server as RCF. Furthermore, we define symmetric locations for market j as locations with the same costs, scale parameters, emissions limits and penalties, as well as the same distances to market j. Before we complete our analysis, we introduce one additional piece of notation: the size of the facility at location i. We choose symmetric locations to illustrate how the results of EUFLP2 and EUFLP3 differ from the traditional UFLP.

Proposition 1: For the EUFLP2 problem, in the case of symmetric locations, it is less expensive to open a single noncompliant facility than q noncompliant and Q-q compliant facilities to serve the demand at location j if the difference in environmental penalties exceeds the additional fixed costs of capacity acquisition and the benefit from economies of scale.
Proof: The cost functions for a single noncompliant full server and for q noncompliant facilities out of Q total servers are compared directly; we omit the subscripts i, j when unnecessary.
In the case of symmetric locations, if the difference in environmental penalties is high compared to the fixed costs of capacity acquisition and the benefit from economies of scale, a single noncompliant facility is better than q noncompliant facilities.

Proposition 2: (a) For the EUFLP2 problem, when the demand is an exact multiple of the largest compliant facility size: in the case of symmetric locations, market j will be served by a noncompliant full server instead of Q compliant partial servers if the total environmental penalty falls short of the added fixed costs and the forgone economies of scale, and by Q compliant partial servers otherwise. (b) When the demand is not an exact multiple: market j will be served by a noncompliant full server instead of Q-1 compliant partial servers plus one compliant remainder server under the analogous condition, and by Q-1 compliant partial servers plus one compliant remainder server otherwise.
Proof: The cost functions with symmetric locations for Q compliant partial servers, and for Q-1 compliant partial servers plus one remainder server producing the residual demand R, are compared against the cost of the single noncompliant full server.
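Dropping subscripts under symmetry, the comparison behind Proposition 2(a) can be sketched in the illustrative notation of Section 2.3. A single noncompliant full server and Q compliant partial servers cost, respectively,

C_{NCF} = f + (v + t) D + a D^{\alpha} + \Phi + \phi (e D - E), \qquad C_{QCF} = Q f + (v + t) D + a Q (D/Q)^{\alpha},

so the noncompliant full server is preferred exactly when the total penalty is smaller than the dispersion premium:

\Phi + \phi (e D - E) \;<\; (Q - 1) f + a D^{\alpha} \big( Q^{1-\alpha} - 1 \big),

where a D^{\alpha}(Q^{1-\alpha} - 1) = a Q (D/Q)^{\alpha} - a D^{\alpha} is the production cost increase from splitting one plant of size D into Q plants of size D/Q. Part (b) replaces the Q equal partial servers with Q-1 servers at the compliant limit plus one remainder server.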
When the fixed costs of building Q-1 extra facilities minus the loss from economies of scale for opening smaller facilities exceed the total environmental penalty paid by the company, it is optimal to have a single full noncompliant server. The company will pay the civil/criminal and cleanup penalties and will bring its operations up to compliance. Otherwise, the company will open Q compliant facilities and not incur the penalties. A similar scenario holds for the RCF case.

Proposition 3: In the case of symmetric locations with location B having less strict environmental regulations, the optimal solution will include a facility in location B.
Proof: We know from Proposition 2 that the demand at market j will be satisfied in one of the three forms. Given that the optimal solution is a single facility, the demand at location j will be served from the facility with the lowest cost. Let Z(i) denote the objective function value if the noncompliant facility is opened in location i. All costs other than the environmental penalty terms are unaffected by the choice of location, so Z(B) is no larger than Z(i) for all i other than B. Given that the optimal solution is Q partial servers, the optimal solution will be the combination of Q locations that gives the minimum cost. Given that all costs are the
same, we can write that any combination of Q locations containing B costs no more than any combination excluding it, so a combination containing location B achieves the minimum. The remainder-server case can be shown similarly.

Proposition 4: In the case of symmetric locations, the optimal number of facilities decreases with an increase in the environmental production constant, provided the penalty condition of Proposition 2 holds.
Proof: As the environmental constant grows, the largest production size that complies with the regional limit shrinks, so the number Q of compliant facilities needed to serve the demand grows, and the cost of the compliant dispersed solution is increasing in Q. The single noncompliant facility therefore becomes optimal for a wider range of penalties. As the company becomes more polluting, it is more likely to open a single noncompliant facility. We proved the conditions under which a single noncompliant facility becomes optimal in Proposition 2; this condition strengthens as the environmental constant increases. Note that this result mirrors empirical evidence in the industrial ecology literature: in his empirical paper, Pashigian (1984) showed that the number of facilities decreased and facility sizes increased with environmental regulations.
35 2.5 Solution Methodology for EUFLP3 The EUFLP3 model presented in Section 3 constitutes an extension of the Capacitated Facility Location and Capacity Acquisition Problem (CFL&CAP) studied in Verter and Dincer (1995). From an analytical perspective, the primary differe nce is the penalties to be paid when the company exceeds the regional emission limits and aggregate transportation emission limits. These penalties result in discontinuities in the total costs of capacity acquired at eac h plant, as depicted in Figure 2 1 . Furthermore, the environmental regulations represented in constraints (3) and (4) act as capacity limitations on the total amount of goods produced by a particular facility. Consequently, the dominant facility property presented in Verter and Dincer (199 5) no longer holds. The problem is clearly NP complete. Holmberg (1994) studied a similar problem with stepwise linear facility costs. He linearization technique, with or w ithout improvements, is a very interesting approach for according to the plant locations. The unit price, the fixed cost, the environmental penalty, variable expansion and production costs and transportation costs are facility separable. Similar to Verter and Dincer (1995) and Holmberg (1994), the above plant cost function can be approximated by two linear segments. Each linear segment represents a pseudo facility th at is related to the plant and is defined by the regional production emissions limit. The first pseudo facility represents the case where the company complies with the limits, and the second pseudo facility is when the company decides to incur the penaltie s. The upper bound, on the first pseudo facility is the production
size that the environmental limit allows, and its lower bound is zero. The lower bound of the second pseudo facility is the upper bound of the first pseudo facility plus one, and its upper bound is the total demand. Once the pseudo facilities are created, the updated model reduces to the integer program below. We reformulate the problem utilizing the piecewise linearization method: each location carries a set of pseudo facilities (linear segments) with known end points, the cost within each segment is linear, and TD denotes the total demand. Initially, the number of pseudo facilities at each location is set to two, and the penalties for the first pseudo facilities are set to zero. The objective Min Z4 is minimized subject to the following constraints:
1. The binary variable indicating an expansion must be 1 if there is an expansion at location i with pseudo facility k.
2. The pseudo facilities are capacitated.
3. Only one pseudo facility can be opened at region i.
4. The binary variable indicating a violation of a transportation emissions regulation must be equal to 1 when the total emissions exceed the specified limit.
5. The demand at each region is satisfied.
6. The constraints defining the ranges of the variables.
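In illustrative notation, let y_ik indicate that pseudo facility k is opened at location i, f_ik and s_ik the fixed cost and slope of segment k, P_ik the penalty attached to the noncompliant segment (zero otherwise), [L_ik, U_ik] its production bounds, and x_ijk the quantity produced at i under segment k for market j. The updated IP can then be sketched as:

\min Z_4 = \sum_i \sum_k (f_{ik} + P_{ik}) y_{ik} + \sum_i \sum_k s_{ik} \sum_j x_{ijk} + \sum_i \sum_j \sum_k c_{ij} x_{ijk} + \Psi w

\text{s.t.} \quad L_{ik} y_{ik} \le \sum_j x_{ijk} \le U_{ik} y_{ik} \;\; \forall i,k; \qquad \sum_k y_{ik} \le 1 \;\; \forall i; \qquad \sum_i \sum_k x_{ijk} = D_j \;\; \forall j;

plus the transportation violation constraint and the binary/nonnegativity ranges.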
Using the aforementioned properties of EUFLP3, we propose the following algorithm.

Pseudo Facility Linear Estimation Algorithm
Step 1. Initialize. For each location i, read the cost, limit, and penalty data; compute the total demand TD; and create two pseudo facilities per location, splitting at the regional production emissions limit, with the penalty attached only to the second (noncompliant) segment.
Step 2. Solve the updated IP using branch and bound, and obtain the optimal production sizes and open pseudo facilities.
Step 3. Improve the approximation. For each location, if the optimal production size falls strictly between the bounds of its segment, separate that range into two new pseudo facilities at the optimal size and update the costs and bounds accordingly.
Step 4. Terminate if no new pseudo facility is generated, and report the current solution as optimal; else go to Step 2.

This algorithm can be used to solve any facility location problem in which more than one fixed cost occurs, as long as the user is aware of the discontinuity points. In addition, if no global constraints are present in the UFLP with discontinuities, a CFLP solver can be utilized instead of an IP solver, which will reduce the computation time.
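A compact sketch of the algorithm's control flow is given below. It is illustrative only: solve_ip stands in for the branch-and-bound IP solver, regions maps each location to its concave-cost coefficients (a, alpha), environmental constant, limit and penalty, and the refinement rule splits a segment at the optimal production size whenever that size falls strictly inside the segment.

def make_segment(r, lo, hi, penalized):
    # Chord (secant-line) approximation of the concave production cost
    # a * q**alpha over [lo, hi]; penalized segments carry the fixed penalty.
    cost = lambda q: r["a"] * q ** r["alpha"]
    slope = (cost(hi) - cost(lo)) / (hi - lo) if hi > lo else 0.0
    return {"lo": lo, "hi": hi, "slope": slope, "penalized": penalized,
            "penalty": r["fixed_penalty"] if penalized else 0.0}

def pseudo_facility_linear_estimation(regions, total_demand, solve_ip, tol=1e-6):
    # Step 1: two pseudo facilities per region, split at the largest
    # production size that still complies with the regional emissions limit.
    segments = {}
    for i, r in regions.items():
        cap = r["limit"] / r["env_constant"]
        segments[i] = [make_segment(r, 0.0, cap, False),
                       make_segment(r, cap, total_demand, True)]
    while True:
        # Step 2: solve the linearized IP by branch and bound.
        q = solve_ip(segments)  # optimal production size per region
        # Step 3: improve the approximation around each chosen size.
        refined = False
        for i, r in regions.items():
            seg = next(s for s in segments[i] if s["lo"] <= q[i] <= s["hi"])
            if seg["lo"] + tol < q[i] < seg["hi"] - tol:
                segments[i].remove(seg)
                segments[i] += [make_segment(r, seg["lo"], q[i], seg["penalized"]),
                                make_segment(r, q[i], seg["hi"], seg["penalized"])]
                refined = True
        # Step 4: terminate when no new pseudo facility is generated.
        if not refined:
            return q

Each pass tightens the chord approximation only where the solver actually chose to produce, mirroring the progressive linear approximation of Verter and Dincer (1995).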
2.6 Realistic Data Set
In Section 2.6, we outline the realistic data set we assembled as the basis of the computational experiments discussed in Section 2.7. To motivate our examples, we collected data from several resources for the automotive industry. Table 2-3 shows the different parameters and data sources that were utilized to estimate parameter values for the model. We gathered financial information from the 10-K and 20-F filings of 5 major car manufacturers: GM, Ford, Toyota, Honda and Hyundai. Please refer to Table A-1 for the complete data. While most of the estimate calculations were straightforward, the estimates for the transportation and production emissions constraints warrant further explanation. The transportation emissions are obtained from the U.S. Energy Information Administration (EIA). The data for carbon dioxide, methane or nitrous oxide are given in kilograms per gallon of fuel used or per mile. In order to complete the calculations concerning transportation emissions, we also utilize a simple average miles-per-gallon estimate. According to the data given on city-data.com and eia.gov, car carrier trucks travel an average of 8 miles per gallon and release 10.15 kilograms of carbon dioxide per gallon. Thus, on average, 1.27 kilograms of carbon dioxide per mile is released into the atmosphere per truck. Note that a full car carrier typically holds 10 cars. Consequently, we estimate the carbon dioxide emitted per car as 0.278 pounds per car-mile. The per-car-mile emissions for nitrous oxide and methane are computed similarly.
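The per-car-mile figure follows from a direct unit conversion, reproduced below as a sanity check (the constants are the ones cited above):

KG_CO2_PER_GALLON = 10.15   # kg of CO2 per gallon of fuel (EIA figure)
MILES_PER_GALLON = 8.0      # average for a car carrier truck
CARS_PER_CARRIER = 10       # a full car carrier
LB_PER_KG = 2.20462

kg_per_truck_mile = KG_CO2_PER_GALLON / MILES_PER_GALLON   # ~1.27 kg per mile
lb_per_car_mile = kg_per_truck_mile / CARS_PER_CARRIER * LB_PER_KG
print(round(lb_per_car_mile, 3))   # ~0.28 lb of CO2 per car-mile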
There are several standards for transportation emissions, such as those established by the EPA and the National Highway Traffic Safety Administration (NHTSA). However, these emission standards are set to regulate per-vehicle emissions rather than the long-term effects of the gases that are released to the environment. As the focus of our model is on the total environmental impact of a company, an environmental limit that incorporates the frequency of the shipments and the distance traveled needs to be incorporated. Thus, we utilize the limits provided in the National Enforcement Trends and Case Statutes published by the EPA. These values are used as an upper bound for the penalty the company would have to pay for transportation emissions violations. In order to find the cumulative transportation emissions in each region, we multiply the distances by the number of cars that are shipped from the production region to the receiving region and by the pounds of greenhouse gases emitted per car-mile, which results in the total pounds of emissions. Next, we estimate the parameters associated with plant-level emissions. The environmental penalty data, the environmental constant and the limit are estimated using www.transportreviews.com, the Environmental Protection Agency's enforcement settlements, and Toxic Release Inventory (TRI) reports from the EPA, respectively. The total release data is the sum of all the chemicals released. In estimating the regional production limits, we base the calculations on the amount of pollutants reported in the EPA's enforcement trends. (See Table A-1 in the Appendix for the complete dataset.) In order to estimate the distances, we divided the contiguous United States into twenty regions and calculated the distances to and from each region's center.
For the numerical experiments, we identified a base case from the range of values established for the parameters, as shown in Table 2-4. These values were chosen so that the base case results in a centralized solution where the effect of economies of scale is dominant. This allows us to pinpoint the scenarios that make the effect of environmental regulations and penalties dominant. Specifically, the environmental limits have no effect on the optimal network structure, and all demand is satisfied from a single facility in the base case.

2.7 Results of the Computational Experiments
The instances of the linearized model are solved using Matlab 2010a/Matlab 2012b. We first outline the results for the base case. The remainder of Section 2.7 focuses on sensitivity analysis of the optimal solution to changes in certain model parameters. Table 2-5 shows the performance of the algorithm compared to Lingo 13.

2.7.1 The Base Case
In the base case with 20 symmetric regions (i.e., where the costs, regulations and penalties are the same in every region), the company opens one plant that satisfies the demand in every region. This plant exceeds both the regional production environmental limits and the transportation emissions limit. The economies of scale effect is high enough that the company realizes value from opening one plant and then pays the penalties. This is the case of a single noncompliant facility location solution, similar to that found in Proposition 2. Moreover, these results are robust when we vary both the variable capacity expansion costs and the variable manufacturing costs within a wide range. Note that an emissions tax policy is essentially an increase in variable capacity acquisition costs. If a symmetric carbon tax exists in all regions, then this
policy does not have an impact on the network dispersion. This result is consistent with the conditional dominance property, stating that the overall cost is more influential in facilities decisions than the emissions tax.

2.7.2 Fixed Costs of Capacity Acquisition and Demand
In the classical UFLP case, an increase in the fixed costs translates into a more centralized network if the transportation costs are negligible. EUFLP3 behaves similarly to the UFLP case; however, the size of the plants and the fixed cost threshold that drives the company to centralization are influenced by the environmental regulations. Specifically, as the fixed costs increase, there are thresholds whereby the limits associated with transportation and production emissions drive the solution. The values for demand at all plants were varied between 50,000 and 1,000,000 units per region. In EUFLP3, high demand translates to a higher possibility of environmental penalties. Thus, when the demand and fixed costs are both high, it is optimal for the company to have a centralized network. In general, when fixed costs are low, the company disperses its network as demand increases until the demand reaches a threshold. Above this threshold, the company can never comply with the regulations, thus it decides to incur the penalties with fewer plants. To summarize, high demand translates to a centralized network regardless of whether the fixed costs are low or high.

2.7.3 The Effect of Economies of Scale
The base case sets the economies of scale parameter, gamma, to 0.9, which provided enough economies of scale to drive the company to violate both environmental limits and incur penalties with a single plant. Note that while a small value is associated with lower economies of scale, it also indicates reduced unit
production costs. Thus, even with small sized plants, the company can benefit from economies of scale, allowing compliance by dispersion to be feasible. In order to fully explore this effect, we varied the regional emissions limit between 100,000 lbs and 600,000 lbs. From Figure 2-3, when the limit is low or high, the network structure is not affected by economies of scale; the scale parameter matters only for intermediate values of the limit, where the dispersion threshold is reached. From Figure 2-2, this threshold occurs at a scale parameter of approximately 0.6. As a benchmark, Manne (1967) comments that a 0.65 value is appropriate for many manufacturing firms, thereby placing most companies above the threshold, where economies of scale dominate the capacity acquisition plan, particularly if the environmental limits are kept at an intermediate level.

2.7.4 The Effect of the Regional Environmental Production Constant
For this experiment, we vary the regional environmental production constant between 0.0 and 0.5, and the results are shown in Figure 2-3. Recall that the constant reflects the percentage of production which contributes to environmental waste, such that a high value represents a high polluting company. For both low and high transportation costs, a high value (specifically, higher than 0.25) causes the company to open one single plant and incur both penalties. This result is consistent with Proposition 4 and Pashigian (1984): with an increase in the constant, the single facility location solution becomes more likely, as it is harder for the company to comply with the regulations in every region. However, our results emphasize the importance of
considering transportation costs concurrently with production pollution when determining an appropriate plant network strategy. When transportation costs and regional production environmental constants are low, the company should centralize its network to avoid the costs associated with excessive production penalties. This result is important because it indicates that low polluting companies with lower transportation costs have more flexibility in establishing their plant network to minimize total costs. Conversely, companies with large transportation costs and low production waste disperse their network when production emissions are low.

2.7.5 The Effect of the Regional Production Environmental Penalty
The combination of the pollution contribution, the fixed penalty charge, the variable penalty, and the pollution threshold together determine the gravity of the environmental penalty. We varied the fixed penalty between $250M and $5,000M for alternate values of the pollution contribution. Note that the upper bound for the environmental penalty, according to the data we collected, is $2,500M; however, we extended our analysis to pinpoint the environmental penalty that fully disperses the network. In particular, if the fixed penalty is below a threshold, then a single plant solution is dominant for all values of the pollution contribution. As this penalty increases, a more dispersed network of plants becomes optimal. Moreover, higher values of the pollution contribution are associated with a greater number of plants when the company disperses the network. Recall that the variable regional environmental penalty is an additional cost incurred for the number of units produced over the regional limitation. As the variable regional penalty increases, the plant network becomes more dispersed, depending on the provided pollution threshold. When the pollution threshold is high,
the firm always chooses a centralized solution regardless of the variable environmental penalty. Therefore, the effect of the variable regional environmental penalty is moderated through the pollution threshold. In summary, it appears that the magnitude of the penalty is more effective than the environmental limit itself. The key managerial insight is that as long as the environmental penalties are greater than the fixed costs of building Q − 1 extra facilities minus the loss from economies of scale for opening smaller facilities, the penalties will drive firms to disperse.
2.7.6 The Effect of the Transportation Emissions Constant, Transportation Emissions Limit and Penalty
Similar to the regional production constant and limit, the transportation emissions constant and the transportation emissions limit determine when the company will incur the transportation emissions penalty. When the limit is lower than a threshold, the transportation emissions penalty has a significant effect on the network dispersion. However, above this threshold, the company is driven less by the transportation penalty and more by the regional production limits and penalties. Figure 2-4 illustrates the impact of increases in the transportation emissions limit on the transportation emissions released. From these numerical results, an increase in the transportation emissions penalty is associated with a decrease in emissions, particularly when the transportation emissions limit is at an intermediate level. Note that intermediate values for the transportation limit drive the company to minimize its transportation emissions. In this range, it is feasible for the company to comply with the transportation regulations by designing a dispersed network. For lower values, the firm
incurs the penalty and centralizes its network. For higher values, the transportation limit is no longer binding. There is also an interaction between the regional pollution thresholds and the global transportation emissions threshold. In general, when the regional threshold is low for all of the regions, it is optimal to open a single plant and incur all of the penalties under low transportation costs, regardless of the transportation limitations. When the regional and global thresholds are both at intermediate levels, the company disperses the network. Therefore, the transportation emissions regulations are most effective when there are intermediate values for the transportation emissions limits accompanied by a high transportation emissions penalty.
2.8 Concluding Remarks and Future Directions
While much of the current literature has focused on emissions from a single plant, we analyze the impact that production and emissions regulations have on a firm's plant network. When the environmental regulations are not taken into consideration, there is a trade-off between transportation costs and fixed costs. For these problems, each demand region is served by the open facility with the lowest total cost. We study the impact of emissions taxes, regional production-level environmental limits and global transportation emissions regulations. Essentially, we find that when the global limit on transportation emissions is relatively low, then a more dispersed production network is optimal. As for the variable capacity expansion or variable manufacturing costs, we find that they do not have a significant effect on the network dispersion for the realistic
instances. An immediate consequence of this result is that a symmetric carbon tax policy (i.e., one applied in all regions simultaneously) will not reduce regional production environmental emissions. In this situation, the company can simply increase prices to compensate for the carbon (emissions) tax. If a company has very high demand in each region, such that dispersion efforts will not make the company compliant, the company is better off centralizing its network. We find a demand threshold for each company beyond which the company conforms by considering new technologies (greener technologies during the construction phase rather than later) rather than dispersing its production network. This option has not been incorporated into our model but can be considered as an extension in the future. In general, high total demand translates to a centralized network. Intuitively, a low regional production environmental limit should force the company into compliance. However, numerical results illustrate that, with regards to the regional production environmental penalty, an increase in the lump sum dollar amount associated with the penalty is much more effective than a decrease in the actual limit of damage tolerated. When non-compliance becomes costly with a large fixed penalty, both the regional production environmental damage and the global transportation emissions are reduced. When the environmental penalties are high and the environmental limits are at intermediate values, the company will disperse its network even if the benefit from economies of scale is great. If the civil/criminal penalties are high, the companies lose profits even if they are not paying the penalties. In order to comply with the regulations, the company disperses its network, which reduces the benefits of economies of scale.
Economies of scale, the primary driver of network centralization in UFLP, are not as dominant for many scenarios in the EUFLP3 case. An industry or a company with high pollution production is better off having a single large plant and either investing in environmental abatement policies or incurring the civil penalties and injunctive relief. However, a company or an industry with low pollution production is better off dispersing the network, so that it does not incur penalties, while opening the largest possible plants that comply with the environmental limits. The same scenario holds for the transportation emissions penalty. If the company has a highly polluting fleet, the optimal decision is to have a centralized network, since the company incurs the penalty even if the network is dispersed. In the intermediate cases, the company should consider dispersing the network to reduce the transportation emissions. These results have implications for policy makers as well. Constantly reducing the environmental limits without increasing the penalties does not force the companies to comply with the regulations. On the contrary, choosing intermediate levels of environmental limits allows the companies to take advantage of economies of scale and avoid the risk of incurring penalties. If the companies can still benefit from economies of scale without having to invest in costly abatement policies, they will try to disperse their network and comply with the regulations. This type of compliance by dispersion reduces the regional production environmental damage and the transportation emissions. By setting the regional production limit and penalties at optimal values, a national standard for environmental damage could be achieved. Dispersion will create small and medium sized plants that do not have high individual environmental impact.
To summarize, in order to reduce the regional production environmental damage and transportation emissions, policy makers should choose intermediate limits but high penalties. Furthermore, companies with low and medium pollution should consider dispersing their network to avoid penalties and reduce their costs. Companies with high pollution should resort to other resources for compliance or take the risk of being penalized. As future research in this area, the total effect of small and medium sized plants can be considered. Small and medium sized plants have a small individual environmental impact, but in total their impact can be large. In addition, the effect of different environmental abatement technology options on facility location problems can be explored.
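To make the cost trade-off behind these findings concrete, here is a minimal numerical sketch in Python. It is not the EUFLP3 formulation itself; every functional form and parameter value below is an illustrative assumption: plant cost follows a power law in plant size with exponent γ (the economies of scale constant), production emissions are a fixed fraction of plant output checked against a regional limit, and a lump-sum penalty is charged per violating plant.

```python
# Toy comparison of total cost for Q identical plants serving R regions.
# All parameter values are illustrative, not the chapter's calibrated data.
R = 20                     # demand regions
D = 300_000                # demand per region (units)
c_plant = 2_000            # plant cost scaling constant
gamma = 0.65               # economies of scale exponent (Manne 1967)
t_cost = 0.06              # transportation cost per unit shipped remotely
alpha = 0.10               # fraction of output that becomes regional waste
e_limit = 90_000           # regional production emissions limit (lbs)
fixed_pen = 1_000_000_000  # lump-sum penalty per plant violating the limit

def total_cost(Q):
    size = R * D / Q                     # output per plant
    build = Q * c_plant * size**gamma    # power-law plant costs
    transport = t_cost * D * (R - Q)     # R - Q regions served remotely
    penalty = fixed_pen * Q if alpha * size > e_limit else 0.0
    return build + transport + penalty

for Q in (1, 2, 4, 5, 10, 20):
    print(f"Q={Q:2d}  total cost = {total_cost(Q):>16,.0f}")
```

With these particular values the minimum lands at Q = 10, the largest compliant plants: raising `fixed_pen` favors compliance by dispersion, while lowering it below the economies-of-scale gap makes the single noncompliant plant cheapest, mirroring the penalty thresholds discussed above.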
Figure 2-1. Non-smooth plant costs

Table 2-1. Summary of model parameters
Model Parameters:
- Number of possible plant locations
- Number of markets
- Fixed cost of location i
- Unit costs at location i
- Unit costs of expansion at location i
- Unit emissions tax
- Unit cost of production at location i
- Demand at location i
- Environmental limit at location i
- One-time penalty of going over the regional environmental limit
- The unit cost of transportation between locations i and j (includes trade tariffs)
- Regional environmental constant between 0 and 1
- Transportation emissions constant for transferring items between locations i and j
- The constant for economies of scale at location i
- The distance between locations i and j
- The transportation emissions penalty
- The transportation emissions limit
- The unit penalty for going over the regional environmental limit
- Total Demand

Table 2-2. Summary of decision variables
Decision Variables:
- 1 if any new facility is established at location i, otherwise 0
- 1 if the regional environmental limit is violated, otherwise 0
- The number of units produced in location i for location j
- The total number of units produced in location i
- 1 if the transportation emissions limit is violated, otherwise 0

Table 2-3. Model parameter estimates and sources (description of data | source)
- Capital Expenditure (Net PP&E Capital Expenditure)/Output | Yahoo! Finance 10K or 20F
- Output or Sales | Yahoo! Finance 10K or 20F; Market Share Reporter
- EPA threshold for the given toxic materials | EPA Toxic Release Inventory
- Civil penalty paid by the company/Estimated Value of Complying Actions | National Enforcement Trends (EPA)
- Shipping estimates | Transport Reviews website
- Total Releases/Output | Toxic Release Inventory
- Emissions in g/mile | Eia.gov or city-data.com
- Expenses/Output | Yahoo! Finance 10K or 20F
- Revenue/Output | Yahoo! Finance 10K or 20F
- EPA Penalties/Output | National Enforcement Trends (EPA)
Table 2-4. Numerical experiment ranges
Parameter | Range | Base Case
Fixed Costs (mil$) | 595–14,325 | 1,000
Variable Manufacturing Costs ($1000s) | 12.600–61.695 | 18
Variable Expansion Costs ($1000s) | 3.420–15.117 | 6
Transportation Costs ($ per car per mile) | 0.02–1.5 | 0.06
Demand (units) | 50,000–17,500,000 | 300,000
Regional Environmental Percent | 0–0.95 | 0.1
Regional Production Limit (thousand pounds) | 0.002–1,308,000 | 90
Regional Penalty for exceeding limit (mil$) | 0–2,500 | 1,000
Economies of Scale Constant | 0–1 | 0.9
Selling Price ($1000s) | 14.5–61.3 | 20
Transportation Emissions Constant (pounds) | 0.000139–0.278 | 0.0001
Transportation Emissions Limit (thousand pounds) | 0.002–1,308,000 | 30
Transportation Emissions Penalty (mil$) | 0–4,636 | 1,000
Variable Regional Environmental Penalty ($mil/unit) | 0–0.268 | 0.1

Table 2-5. Performance of the algorithm (CPU time, seconds)
Number of Locations | Algorithm | Lingo
4 | 1.1807 | 52
5 | 1.2015 | 972
6 | 1.2126 | 15,663
8 | 1.2329 | >19,200
16 | 1.4854 | >19,200
20 | 2.3705 | >19,200
45 | 3,427.6 | >19,200
Figure 2-2. The effect of gamma (γ) on network dispersion. Number of plants versus the economies of scale constant (γ, 0.1–0.9) for 20 demand locations, with curves for Elimit = 100,000, 300,000, 400,000, and 600,000 lbs.

Figure 2-3. The effect of the regional production environmental constant on network dispersion. Number of plants versus the regional production environmental constant (0.05–0.4) for 20 demand locations, with curves for high and low transportation cost.
Figure 2-4. The effect of the transportation emissions penalty (G) on transportation emissions. Transportation emissions released (thousand pounds) versus the transportation emissions limit (thousand pounds), with curves for G = $2500M and G = $1000M.
CHAPTER 3
RESOURCE ALLOCATION OF NONPROFITS: A CASE OF ANIMAL SHELTERS
3.1 Motivation
In 2011, the number of charities and private organizations registered with the Internal Revenue Service exceeded 1.6 million (IndependentSector.gov). Weisbrod (1974, 1979) describes nonprofits as providers of public goods and explains the existence of nonprofit organizations as the response to the excess demand left by governmental organizations, which can only serve to the level of the median voter. This excess demand is a social responsibility that is an external cost to the government or the people. The services nonprofits provide range from protecting the environment, to bringing performing arts to the community, to supplying shelter, protection and resources to humans or animals. In our paper, we choose to focus on animal shelters, as the stray animal population and euthanization are significant problems. According to the ASPCA, 5–7 million animals enter shelters each year and almost 50% of them are euthanized. The trap-and-euthanize program may intuitively seem like an economically feasible solution to the problem, but socially it is not acceptable. Moreover, the Oxford Lafayette Humane Society approximates the total number of stray dogs and cats in the US to be 70 million. The cost of stray animals to the wellbeing of the community may be seen as minuscule, but it is not. In their report, Fitzgerald and Wilkinson discuss the effects of stray animals on mental health and wellbeing, as well as their negative impacts at the regional and national level, such as on tourism, the distribution of costs and benefits, and damages to indigenous cultures. In several communities, outdoor cats are known to be the suspect
in many deaths of endangered birds (Cats vs Rare Bird, Wayne Parry). In Cape May, home of the Annual World Series of Birding and one of the prime bird watching spots in North America, where bird watching brings $2 billion to the New Jersey economy every year, the cost of feral cats can be estimated to be very high. One of the many challenges nonprofit organizations, including animal shelters, face today is the optimal allocation of resources and obtaining sufficient monetary donations to continue their services. Nonprofit organizations can use their efforts to receive funds from donations, grants and advertisements. They need to allocate their resources, like manpower and money, efficiently to organize fundraising activities, qualify for grants, set pricing policies and invest in advertisement to improve their services and funds. However, the return on these investments is highly stochastic, causing an unstable source of funds. Among nonprofit organizations, some are more overlooked than others. An example is the Best Friends Animal Society, the largest sanctuary for abused and abandoned animals in the US: with four stars, or a score of 61.11 out of 70 at Charity Navigator, it received 23.48% of the contributions that the World Wildlife Fund, a leading wildlife conservation organization with three stars, or 55.63 out of 70, received in 2012 (Charitynavigator.org). This difference in donations could be explained by the priority of the problems they address; however, a difference in funding between the same types of organizations also exists. For example, the Greenville Humane Society, the highest ranked animal shelter in 2012, received only 0.025% of the contributions the Best Friends Animal Society received. Sarsted and Schloderer (2010) believe that the reputation of a nonprofit organization can be considered an intangible asset, and they show that
it has a significant effect on willingness to donate. Since a universal ranking/rating system does not exist, many donors utilize the rankings of organizations such as Charity Navigator or Guidestar. Charity Navigator currently uses two types of metrics to evaluate nonprofits: financial health, and accountability and transparency. This ranking system quantifies certain characteristics of the organization adequately, but it is not able to explain the differences in funding due to reasons related to alignment to mission. The significant effect of reputation on willingness to donate has been shown; alas, a common reputation metric capturing all aspects of an organization does not exist for nonprofits. In our paper, we try to capture the effect of reputation, fundraising activities and advertisements on the donations the animal shelter receives. We incorporate several factors known to be effective in increasing donations to appropriately represent the reputation of an animal shelter. In addition to donations from advertisement or fundraising activities, animal shelters also receive funds from adoption fees to cover their expenses or to invest in growth. In addition to obtaining sufficient funding for the existence and growth of the organization, nonprofit organizations also need to stay true to their mission, and show the alignment between their program expenses and their goal. Sawhill and Williamson (2001) argue that mission success cannot be measured purely economically and that we need other metrics to quantify an organization's success in its mission. Unfortunately, unlike the financial measures, there are no common performance measures; they change depending on the organization. However, Sawhill and Williamson (2001) were able to provide guidelines for finding these performance metrics, drawing on
the family of measures introduced by The Nature Conservancy, which contains three types of measures for nonprofit organizations: (i) impact measures, which measure progress toward the mission; (ii) activity measures, which measure progress toward programs that help achieve the mission; and (iii) capacity measures, which measure progress toward the necessary requirements for the nonprofit to exist. In the case of The Nature Conservancy, the impact measures are the biodiversity health and threat abatement, the activity measures are the projects launched and sites protected, and the capacity measures are the public and private funding, total membership and market share. We identify impact measures related to animal shelters by analyzing the mission statements and the goals of several animal shelters. Perhaps the most significant problem with animal shelters is euthanization. In an effort to reduce needless euthanizations, one county launched a no-kill shelter program in 2014 (Sorentrue, 2014). We propose the mean euthanization rate metric to quantitatively capture the performance of shelters in this impact measure. The euthanization problem would be eliminated if there were enough demand to adopt all homeless animals. At the Fort Worth Animal Shelter, in recent years, the goal has been increasing the live release rate, or the number of animals not euthanized. We capture this measure through the mean demand rate and the mean adoption rate metrics. The animals who are neither adopted, nor currently in a shelter, nor euthanized are turned away from animal shelters, creating a stray population. The ACT Clinic, an organization whose mission includes reducing this stray population, illustrates this goal. In our paper, we use the mean rejection rate metric to
measure the success of this mission. Another measure concerning any nonprofit is the efficient use of its resources, a goal that appears, for example, in the City of Boston's mission statement (cityofboston.gov). We capture this goal through the traffic intensity metric. Finally, for any animal that is not adopted, rejected or euthanized, the amount of time spent in a shelter has a significant effect on its wellbeing. The Clermont County shelter, for instance, aims to "get animals ... adopted out, as quickly as possible, so we have more space for the next". In this paper, we identify two types of animal shelters: (i) traditional and (ii) adoption guarantee. Traditional shelters are shelters that accept all animals and perform euthanization due to behavioral or health problems, or space issues (e.g., Animal Control). Adoption guarantee shelters do not perform euthanization and are also identified as no-kill shelters. We first compare these shelters on the aforementioned impact measures. We find that traditional shelters perform better in most of these measures except for the mean euthanization metric. Moreover, we provide optimal resource allocation and adoption fee policies that maximize the performance of animal shelters. We utilize the idea of social and economic performance measurements from Sawhill and Williamson (2001) to simultaneously maximize the impact measures and the funding of animal shelters. (For the rest of the paper we will refer to investments as activity measures, and to the expected return on investments plus the amount left over from adoption fees minus the operational costs as capacity measures.) We formulate six different optimization problems to provide animal shelters with the most suitable objective for their organization.
In particular, we address the following research questions:
1. What are the impact, activity and capacity measures for animal shelters?
2. How do different organizations/shelters compare in their impact measures?
3. How should these organizations allocate their resources and set their adoption fees to maximize their performance?
4. Under what circumstances is a negative adoption fee optimal?
5. Is there an objective function that performs better than others?
This paper is organized as follows: In Section 3.2, we provide a brief literature review of research on nonprofit organizations. In Section 3.3, we first propose quantitative metrics to represent the impact measures of animal shelters. We then compare two types of animal shelters on the impact measures: traditional shelters and adoption guarantee shelters (Section 3.4). We show through this comparison that traditional shelters perform better in most impact metrics except for the mean euthanization rate. Another important subject concerning nonprofit organizations is the allocation of resources to achieve the organization's mission. In Section 3.5, we provide optimal resource allocation and fee policies that maximize the performance metrics of an organization.
3.2 Literature Review
Most of the literature on nonprofit organizations has focused on comparisons between nonprofit and for-profit organizations. The profit maximization problem has been studied extensively for for-profit firms/organizations. In contrast, profit maximization is not a suitable objective for nonprofit firms. The objective of nonprofit organizations has often been assumed to be the maximization of the quality or quantity of services rendered; this has been studied in detail for organizations with tangible services such as hospitals,
universities and performing arts (Newhouse 1970, Feldstein 1971, James and Neuberger 1981). Lee (1971) proposes a model that maximizes the organization's use of inputs. Another common objective is the maximization of the budget (Tullock 1966, Niskanen 1971). Tullock and Niskanen also considered different types of nonprofits: purely donative and purely commercial. We assume the revenue of animal shelters to be obtained through donations as well as adoption fees. For a detailed literature review, please refer to Hansmann (1980). In later studies, Verheyen (1998) explores the internal and external budget models of nonprofit organizations and the integration of managerial and professional decisions. On another topic, Lien et al. (2014) study the resource distribution operations of a nonprofit organization, and provide a heuristic for discrete resource demand distributions. In his paper, Kingma (1993), aware of the similarities in uncertainty between stock returns and returns on funds for nonprofit organizations, utilizes financial portfolio theory to minimize the financial risk of nonprofit organizations. He explains that, much as portfolio managers seek to balance risk and return, managers in the nonprofit sector seek to provide a certain level of services while minimizing the risk in nonprofit donations; he minimizes the variance of net revenues given that there are no real changes in the expected funding from nongovernment sources, and the relative growth of any funding is the same from all sources. Similarly, Chabotar (1989),
Gronbjerg (1990, 1991a, 1991b) and Chung and Tuckman (1991a, 1991b) study revenue predictability and revenue growth for nonprofit organizations. On another note, some sources believe that the objective of nonprofit fundraising is better defined as maximizing the net revenues rather than keeping the expenses as low as possible (A WealthEngine White Paper). In this report, the authors also identify several factors that might affect the return on investment, such as the type of organization, fundraising targets, size and wealth of the target donor audience, number and type of fundraising staff employed, and also the extent and focus of the fundraising strategy. In Section 3.3, we present our model: we utilize queuing theory to represent the impact metrics of animal shelters, financial portfolio theory to represent the expected return on donations, and general optimization to provide the resource allocation policy that will maximize the impact metrics, the expected return from donations, and the fees collected from services less the operating costs of a shelter with capacity k.
3.3 The Model
We identified the following impact metrics for any type of shelter: the mean demand rate, the mean waiting time, the mean rejection rate, the traffic intensity and the mean adoption rate. These impact measures support the animal shelters in achieving their mission. In addition to these common impact measures, we also propose a shelter-specific impact metric for shelters with euthanizations (i.e., traditional shelters): the mean effective euthanization rate. We use queuing theory to estimate these impact measures and compare them for two different types of shelters: traditional and adoption guarantee shelters (see Fomundan and Herman (2007)). The adoption guarantee shelters are euthanization free, and thus are
represented by an M/G/k/k queue without reneging, while the traditional shelters are represented by an M/G/k/k queue with reneging due to the euthanization policies. In an M/G/k/k queuing system, there is a single class of animal arrivals with an exponential interarrival time distribution with mean arrival rate λ, and a generally distributed departure process with mean rate µ, where λ and µ represent the mean arrival rate for the population and the mean demand rate, respectively. There are k servers in the system, which represent the maximum capacity for care at a shelter. In an M/G/k/k queue there is no queue space: in the event of a new arrival, when all the primary care areas are full, the animal is turned away from the shelter (i.e., a rejection). In traditional shelters, in addition to the rejections, an animal that has been in the shelter for more than a certain number of time units is euthanized with probability p and kept at the shelter with probability 1 − p. This waiting time is known as the reneging time, and it is generally distributed. Figure 3-1 illustrates the rehoming process of adoption guarantee and traditional shelters. Let j represent the shelter type, where j is 1 for adoption guarantee shelters and 2 for traditional shelters. P_k,j is defined as the probability that a shelter of type j is full (i.e., has k animals), and is dependent on the arrival, demand and euthanization rates. If a shelter of type j is full, animals are turned away from the shelter at a mean rate of λP_k,j and accepted to the shelter at a mean rate of λ[1 − P_k,j]. The impact measures we identified can be represented by the following metrics: (i) mean demand rate, (ii) mean waiting time, (iii) mean rejection rate, (iv) mean effective euthanization rate, (v) traffic intensity/utilization, and (vi) mean adoption rate. The mean demand rate is the demand for animal adoptions. If the animal shelters did not have
capacity constraints, or an abundant number of animals were available, this measure would equal the mean adoption rate. However, the mean adoption rate is limited by the animals that are available at the shelter and the capacity of the shelter. Table 3-1 summarizes the notation we will be using in this paper.
3.3.1 Adoption Guarantee Animal Shelter: M/G/k/k (No Bulk Arrivals, No Priorities)
Type 1, or adoption guarantee shelters, are euthanization-free shelters which are represented by an M/G/k/k queue. To facilitate the comparison between the two types of shelters, we assume that the arrival rate λ and the mean demand rate µ are the same for both. M/G/k/k queues are a special case of M/G/c/k truncated queues, and the performance measure calculations are well known (Gross 1985):

P_n = [(λ/µ)^n / n!] P_0, for n = 0, 1, ..., k    (1)

P_0 = [ Σ_{m=0}^{k} (λ/µ)^m / m! ]^{-1}    (2)

The performance measures for M/G/k/k queues are summarized in Table 3-2. (For tractability reasons we represent functions with f(.) and separate collections of terms using [ ].)
3.3.2 Traditional Animal Shelter: M/G/k/k with Reneging (No Bulk Arrivals, No Priorities)
Traditional animal shelters are represented by an M/G/k/k queue with reneging and retainment. In this case, the animals depart from a traditional shelter in two ways: (i) adoptions and (ii) euthanizations. The animals depart by adoption; this corresponds to the animals that reneged but have been retained (not euthanized), and the animals that never reneged. For the rest, with probability p, the animals that reneged are euthanized. The steady state equations can be written in a
similar way as for the type 1 shelters. The steady state equations for the M/G/k/k queuing system with reneging and retainment are shown in Table 3-3 (Pallabi, 2013).
3.4 Performance Comparison
In Section 3.4, we compare the traditional and adoption guarantee shelters on the aforementioned impact metrics. The first metric, the mean demand rate, represents the population's demand for adopting animals in a shelter. We assume this metric to be the same for any type of shelter. This means that each shelter has the same opportunity to perform well in terms of adopting out animals. We first present some observations and propositions to aid in comparing the different impact metrics. According to the HSUS, 6–8 million animals enter shelters each year and about 50% of them are euthanized (humanesociety.org). The euthanizations are performed due to health or behavior problems and sometimes space issues. Despite the high number of euthanizations, it is easy to find an empty animal shelter at any time. In Observation 1, we show that traditional animal shelters are more likely to be empty than adoption guarantee shelters:
Observation 1. The probability of a traditional shelter being empty is greater than or equal to the probability of an adoption guarantee shelter being empty, given that the arrival and departure rates are the same. That is, P_0,2 ≥ P_0,1.
Proof: The result follows by comparing the steady state probabilities of the empty state for the two systems; the reneging in traditional shelters increases the effective departure rate,
and hence P_0,2 ≥ P_0,1. From Observation 1, we see that traditional animal shelters may be performing unnecessary space-related euthanizations. On the contrary, adoption guarantee or no-kill shelters are less likely to be empty and utilize their capacity more efficiently. Any animal that is not admitted to the shelter, not euthanized, or not placed in a home will be turned away from the shelter, becoming a stray and trying to survive on its own. It is essential for animal shelters to admit and adopt out as many animals as possible. To compare the mean rejection rate measure of adoption guarantee and traditional shelters, we use the property that the blocking probability of any type of shelter is decreasing and strictly convex as a function of the departure rate.
Proposition 1. The blocking probability is strictly convex and decreasing in the departure rate µ and the capacity k.
Proof: This result is evident, as Harel (1990) showed that the blocking probability is convex and decreasing in the service rate. The blocking probability, when multiplied by the arrival rate λ, represents the rejection rate in our model. We can say that the mean rejection rate is at least as high in adoption guarantee shelters as in traditional shelters. It is easy to see that the corresponding result in the capacity k also holds. In Proposition 1, we show that the probability of a shelter being full, that is, the probability of an animal being turned away from a shelter, is smaller for traditional shelters than for adoption guarantee shelters, given that the arrival and mean demand rates
are the same. A smaller blocking probability and a smaller stray population are achieved at traditional shelters at the expense of euthanizations. We also see that increasing the departure rate or the capacity of a shelter will decrease the mean rejection rate at a shelter. From convexity, we can also say that the blocking probability decreases at a slower rate with increasing capacity or increasing mean demand rate. Any shelter with a goal to reduce the stray population can achieve its mission by spay/neuter programs, increasing its capacity, and increasing the mean departure rate. However, the more the shelter increases the capacity, the mean demand rate or the mean euthanization rate, the smaller the changes in the mean rejection rate will be. Another concern related to the wellbeing of animals is behavioral deterioration: a reported share of cats who stay in shelters more than 90 days develop behavioral problems such as overstimulation/aggression or fearfulness/shyness (Parry, 2000). Next, we show that the waiting times at the two types of shelter are stochastically ordered.
Proposition 2. The waiting time in a traditional shelter is stochastically smaller than or equal to the waiting time in an adoption guarantee shelter.
Proof: Let W_1 denote the waiting time random variable in the adoption guarantee shelter, with distribution function F_1. Then the waiting time random variable in the traditional shelter, say W_2, is a mixture of the random variables min(W_1, T) and W_1, where T is the threshold random variable at which the animal may be euthanized with some probability p. The distribution of W_2 is the corresponding mixture distribution.
It is immediate that F_1(t) ≤ F_2(t) for all t; hence W_2 is stochastically smaller than or equal to W_1. This proves that an animal is more likely to spend a longer time in an adoption guarantee shelter than in a traditional shelter. We also know that the mean waiting time at a traditional shelter is smaller than the mean waiting time at an adoption guarantee shelter. A shelter trying to improve its waiting time should try to increase the departures from the animal shelter. There are two ways of achieving this: by increasing adoptions or by accepting euthanization, although the latter is not socially preferable. Nonprofit organizations, and specifically animal shelters, must use their resources efficiently. One way to measure efficiency in queuing systems is the traffic intensity/utilization, which represents the percentage of the time the resources are busy. When it comes to efficiency, the traffic intensity/utilization of a traditional shelter is less than that of an adoption guarantee shelter if the capacity is below a threshold. To show this, in Proposition 3 we first prove certain characteristics of the traffic intensity for an M/G/k/k queue.
Proposition 3: The traffic intensity of any type of shelter in an M/G/k/k queue is semistrictly quasiconcave; furthermore, it is decreasing as a function of the departure rate for any capacity below a threshold.
Proof: See Appendix.
From Proposition 3, we know that the traffic intensity is decreasing as a function of the departure rate for both types of shelters if the capacity is below a threshold. Under this condition, the traffic intensity of the traditional shelters is smaller than the traffic intensity of
adoption guarantee shelters. The adoption guarantee shelters are better at utilizing their capacity when compared with traditional shelters. The comparison of the adoption rate impact measure depends on Propositions 1–3. The mean adoption rate of a shelter differs from the mean demand rate of a shelter. The mean demand rate represents how fast an animal is adopted once it becomes available for adoption, whereas the mean adoption rate depends on the mean demand rate, the capacity of the shelter and the animal admittance rate. The larger the mean demand rate, the faster the animals can be rehomed. Similarly, the larger the animal shelter, the more animals are rehomed. The mean adoption rate of traditional shelters is less than the mean adoption rate of adoption guarantee shelters. While adoption guarantee shelters adopt out every animal they admit, traditional shelters adopt out only a percentage; the rest of the admitted animals at traditional shelters are euthanized. In Table 3-4, we show a summary of the comparison of the common impact metrics. From Table 3-4, traditional shelters compare favorably on every common impact metric except the number of animals adopted out relative to an adoption guarantee shelter.
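As a numerical illustration of these comparisons, the sketch below computes the common impact metrics for both shelter types. It relies on a simplifying assumption not made in the chapter: exponential patience, so that a traditional shelter behaves like a loss system whose effective departure rate is inflated to µ + pθ, where θ is a hypothetical reneging rate. Since the blocking probability of an M/G/k/k system depends on the service distribution only through its mean, the Erlang loss formula applies to both cases.

```python
from math import factorial

def shelter_metrics(lam, mu, k, p=0.0, theta=0.0):
    """Impact metrics for an M/G/k/k shelter. For a traditional shelter
    (p, theta > 0), each admitted animal also reneges at rate theta and
    is euthanized with probability p, giving an effective departure
    rate of mu + p * theta (an exponential-patience simplification)."""
    mu_eff = mu + p * theta
    rho = lam / mu_eff                       # offered load
    terms = [rho**n / factorial(n) for n in range(k + 1)]
    norm = sum(terms)
    p_empty, p_full = terms[0] / norm, terms[-1] / norm
    admit = lam * (1 - p_full)               # mean admission rate
    adopt = admit * mu / mu_eff              # departures that are adoptions
    return {"P_empty": p_empty,
            "P_full": p_full,
            "rejection_rate": lam * p_full,
            "adoption_rate": adopt,
            "euthanization_rate": admit - adopt,
            "utilization": admit / (k * mu_eff)}

# Same arrivals and demand; type 1 (adoption guarantee) vs type 2 (traditional)
ag = shelter_metrics(lam=2.0, mu=0.5, k=5)
tr = shelter_metrics(lam=2.0, mu=0.5, k=5, p=0.5, theta=0.4)
for key in ag:
    print(f"{key:20s} AG: {ag[key]:.3f}   TR: {tr[key]:.3f}")
```

The output reproduces the qualitative ordering above: the traditional shelter is more often empty, rejects fewer animals, and has lower utilization, at the cost of a positive euthanization rate.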
In Proposition 4, we show some characteristics of the mean effective euthanization rate. Please note that the mean euthanization rate represents the frequency with which an individual animal is euthanized, whereas the mean effective euthanization rate refers to the whole shelter.
Proposition 4: The mean effective euthanization rate is increasing in the capacity k; furthermore, it decreases with the mean demand rate beyond a threshold.
Proof: The admission rate is increasing with k and the blocking probability is decreasing with k; the result then follows from Proposition 2.
Intuitively, the mean effective euthanization rate of an animal shelter should decrease with an increase in the capacity k. However, an increase in capacity also decreases the mean rejection rate, and an animal shelter with a low mean rejection rate is more likely to accept animals with behavioral or health problems. This increase in admittances will also increase the mean effective euthanization rate. The same result holds for the effect of the arrival rate on the mean effective euthanization rate. In order to reduce the mean effective euthanization rate, the animal shelters can decrease the capacity, the mean euthanization rate or the arrival rate, or increase the mean demand rate. Animal shelters employ several methods to reduce euthanization rates, increase adoptions or simply improve the efficiency of their operations. In addition to achieving these goals, they need to manage their resources efficiently to continue providing services.
3.5 Resource Allocation for Adoption Guarantee Shelters
In Section 3.5, we introduce the activity and capacity measures, and we provide the optimal resource allocation policies for adoption guarantee animal shelters to invest in programs that maximize their performance metrics. Activity metrics measure the progress toward programs that help achieve the mission. Animal shelters can invest in programs to assist the organization in reaching its mission and long term goals. They can
invest in programs that benefit the welfare of the animals by increasing the adoption rate, or invest in a wide range of fundraising activities that increase the monetary donations. The amount of resources an animal shelter dedicates to these programs represents the activity metrics. These activity metrics are limited by the resource capacity of the organization; thus a + d ≤ m. For simplicity, we will refer to adoption increasing programs as advertisements and to monetary donation increasing programs as fundraising activities for the rest of the paper. The third set of metrics we introduce are the capacity metrics, which measure progress toward the necessary requirements for the nonprofit to exist. In the case of The Nature Conservancy, the capacity measures were the public and private funding, total membership and market share. We identify the capacity measures for animal shelters to be the donations the organization receives. In our model, we use a similar approach to Kingma (1993) to represent the return on investment from different fundraising programs. Tables 3-5 and 3-6 list the terminology used in Section 3.5. The expected return from fundraising activities consists of a risk-free component, the donations any shelter would receive, plus a component that grows with the fundraising investment. According to the 2010 Nonprofit Fundraising Survey, organizations with larger investments in fundraising activities saw higher increases in their donations in 2010; hence, we assume that the expected return from fundraising activities is directly affected by the fundraising investment amount d. The return also depends on the difference between the mean animal shelter reputation and the reputation of the specific animal shelter, scaled by a coefficient describing how sensitive the expected return is to a 1% change in reputation. The donations available to animal shelters are limited and fairly competitive.
Thus, an animal shelter's reputation is decreasing with the adoption fee rates, and increasing with investments in advertisements and with the size of the organization due to the high visibility effect (Weiss et al. (1999), Kotha et al. (2001)). Separate coefficients represent how much the reputation factor is affected by a change in adoption increasing investments, the size of the animal shelter and the adoption fee rates, respectively. In addition to the return on investments, advertisements increase the mean demand rate, which we model as an increasing function of the advertisement investment a. In our resource allocation model, we maximize the performance metrics we defined. Given these measures, the optimization problem for the adoption guarantee shelter can then be written as the maximization of a weighted sum of three terms, subject to the resource constraint a + d ≤ m. The first term is an impact measure, with the objective function including performance measure l for shelter type j. The second term is the expected return from advertisements or fundraising activities as monetary donations. The third term is the donations received through animal adoption fees minus the cost of operating an animal shelter of capacity size k. We denote the shelter's reputation before investments and the average reputation of all shelters as parameters of the model. The activity metrics are bounded by the resource limits of the organizations, and are incorporated into our model as a constraint. These activities, increasing the mean demand rate or the
monetary donations, impact the return on investments and the impact metrics of the shelter. The monetary component of the objective function represents the capacity measures that ensure the organization continues to exist.
3.5.1 Mean Demand Rate
In the mean demand rate problem, we simultaneously maximize the mean demand rate, the expected return on investments and the donations from adoptions minus the operating costs. We first introduce an important property of the blocking probability that aids in our analysis: we show that the blocking probability is jointly convex in a and f.
Theorem 1: The blocking probability B is jointly convex in a and f. B is strictly increasing and convex in f, strictly decreasing and convex in a, and has decreasing differences in a and f.
Proof: B is convex and decreasing in the mean demand rate. We need to show that B(g([1 − t]x + ty)) ≤ [1 − t]B(g(x)) + tB(g(y)). Since g is linear, we can write B(g((1 − t)x + ty)) = B((1 − t)g(x) + tg(y)) ≤ (1 − t)B(g(x)) + tB(g(y)). Please see Appendix B for the second result.
In Table 3-7 we show the optimal solution to the adoption guarantee problem for the different impact measures. To better explain the aforementioned results we perform further analysis:
Observation 2 (Mean Demand Rate): The exact optimal adoption fee that maximizes the mean demand rate problem is decreasing as a function of a if the
marginal effect of a on the expected return on investment (ROI) is larger than the marginal effect of d on ROI. Conversely, the optimal fee is increasing as a function of a if the marginal effect of a on both the mean demand rate and ROI is less than the marginal effect of d on ROI.
Proof: In Appendix B, we establish the sign of the derivative of the optimal fee with respect to a under each condition.
In Observation 2, we see that if the marginal effect of a on ROI is larger than the marginal effect of d on ROI, the adoption fee is decreasing. An investment in adoption increasing programs increases both the reputation of the nonprofit organization and the mean demand rate. Generally, an increase in demand translates to an increase in prices as well. In this case, however, as the mean demand rate increases, the adoption fees decrease. This result can be explained by the fact that three types of benefit are gained from an investment in a: (i) an increase in the mean demand rate, (ii) an increase in the expected return due to the improvement in the reputation of the organization, and (iii) an increase in the mean adoption rate. This can also be explained by the nonprofit nature of the organization: if the organization can receive enough funds through donations, it does not need to charge a fee for its services. Conversely, if the marginal effect of a on the objective function, from the increases in the mean demand rate and ROI, is smaller than the marginal effect of d on the objective function, then the adoption fee is increasing with a. In this scenario, as the organization
decides to invest more in a, the benefits on ROI decrease. In order to make up for these losses, the organization must increase the adoption fees. In summary, if the marginal benefit of investing in adoption increasing programs is higher than the marginal benefit of investing in fundraising, the organization will decrease its adoption fees, as the objective of a nonprofit organization is not to make profit but to provide a service. Otherwise, the organization is already losing on ROI by investing in a and must increase adoption fees to cover its expenses.
Observation 3 (Mean Demand Rate): The optimal adoption fee with approximation that maximizes the mean demand rate problem is increasing as a function of k if the marginal effect of a on the expected return on investment (ROI) and the mean demand rate is larger than the marginal effect of d on ROI.
Proof: We know from Cardoso (2009) that the blocking probability is decreasing in k, and the result follows by examining the sign of the derivative of the optimal fee with respect to k under each condition.
If the marginal benefit of advertisements is larger than the marginal benefit of fundraising, the optimal adoption fee increases as a function of the capacity k. An animal shelter will increase adoption fees as a response to cover expenses, and it will decrease fees as donations become sufficient to support the expenses. The monetary gain from a is not sufficient to cover the expenses of a large shelter, as the effect of a on donations is indirect (through increasing reputation). Hence, the
organization must increase adoption fees. If the marginal benefit of d is larger than the marginal benefit of a, the optimal adoption fee decreases as a function of capacity. The fundraising activities have a more direct effect on the organization's donations but have no effect on the impact metrics. In this scenario, the organization has sufficient funds to support a large shelter, but it still has to lower adoption fees to increase the mean demand rate. This result is the same for the heavy traffic approximation. The capacity k has the opposite effect on the optimal resource allocation to advertisements with approximation: the optimal allocation to advertisements is decreasing as a function of k if the marginal effect of a on the objective function is larger than the marginal effect of fundraising and the adoption fee on the objective function. Advertisements and capacity both increase the reputation of the organization and the mean demand rate; if the marginal benefit of a is large, a one unit increase in a will have a significant impact on the objective function, especially with a large capacity. Hence, the organization can spare more resources to invest in fundraising. We also observe that the optimal allocation to advertisements is decreasing as a function of the initial mean demand rate, once the initial demand is large enough that the effect of advertisements is insignificant.
Observation 4 (Mean Demand Rate): The capacity measures component of the optimal objective function value with approximation is larger than zero when the risk-free return rate exceeds a threshold. In this scenario, the optimal adoption fee with approximation can be set to a negative value.
Proof: The result follows by evaluating the capacity measures component at the optimal solution; the component is positive when the risk-free return rate exceeds the stated threshold.
We see that when the risk-free return on investments in resources is larger than a threshold, the animal shelters can provide incentives for adopters, i.e., veterinary or supply discounts, given that an adoption screening process is in place. To evaluate the scenarios where the adoption fee can be negative, we perform some numerical experiments. We choose a base case that satisfies the above conditions and vary the risk-free return rate between 0 and 10. This represents the scenario where the marginal benefit of a dominates; hence the optimal adoption fee is decreasing as a function of a. In Figure 3-2, the shaded area represents the risk-free return rate that allows the organization to provide incentives for adopters. We see that as the risk-free return rate increases, the optimal adoption fee and the optimal allocation to advertisements decrease. In this scenario, a small increase in advertisements leads to a higher benefit on ROI; the organization can invest less and still receive a high return on advertisements.
Observation 5 (Mean Demand Rate): The objective function for the mean demand rate problem is increasing as a function of k when the per unit capacity operating cost is below a threshold, and decreasing as a function of k otherwise.
Proof: If we remove the integrality condition on k, we can write:
the derivative of the objective function with respect to k, which is nonnegative when the per unit capacity operating cost is smaller than a threshold; here we use the fact that the blocking probability is decreasing and convex in k (Cardoso 2009). If the cost of maintaining a unit of shelter capacity is smaller than this threshold, an increase in the capacity of the animal shelter will improve the objective function despite the operating cost.
3.5.2 Mean Waiting Time
For the mean waiting time problem we cannot obtain exact solutions, so we explore the heavy traffic approximation solutions. The possible program investments a and d compete for resources; consequently, the adoption fee is decreasing as a function of d while it is increasing as a function of a. As the organization allocates more resources to advertisements, it does not need to decrease the adoption fee to improve the reputation, the waiting time or the mean adoption rate. Conversely, if the organization allocates more resources to fundraising, it needs to decrease the adoption fees to reduce the waiting time and increase the mean adoption rate. The discrepancy between the optimal adoption fees for the mean demand rate problem and the mean waiting time problem is caused by the differences in the rates of change of the two impact metrics: the rate of change in the mean demand rate is larger than the rate of change in the mean waiting time with respect to a. While the optimal adoption fee for the mean demand rate problem is not affected by the initial mean demand rate, the optimal solution for the mean waiting time problem is; it is increasing as a function of the initial mean demand rate.
This difference between the mean demand rate problem solution and the mean waiting time problem solution is explained by the fact that the rate of change of the mean waiting time is affected by the initial mean demand rate, while the rate of change of the mean demand rate is not. By reducing the adoption fee, an animal shelter can improve the demographic it reaches regardless of its initial mean demand rate, causing an increase in the mean adoption rate. This increase in the mean adoption rate will decrease the mean waiting time; however, the change in the mean waiting time will still depend on the initial demand rate.
Observation 6 (Mean Waiting Time): The optimal adoption fee with approximation for the mean waiting time problem is increasing as a function of k if the marginal effect of a on the expected return on investment (ROI) is smaller than the marginal effect of d on ROI, and decreasing otherwise.
Proof: We know from Cardoso (2009) that the blocking probability is decreasing in k, and the result follows by examining the sign of the derivative of the optimal fee with respect to k under each condition.
For the mean waiting time problem, the optimal adoption fee with approximation is increasing as a function of k when the marginal benefits of f and d exceed the marginal benefit of a.
Similar to Observation 5, the objective function is increasing as a function of k when the per unit capacity operating cost is below a threshold, and decreasing as a function of k otherwise. This threshold is the same for the mean demand rate problem and the mean waiting time problem, since neither of these impact measures is a function of k.
3.5.3 Mean Rejection Rate and Mean Adoption Rate
The optimal adoption fee solution to the mean rejection or mean adoption rate problem shows similar characteristics to the mean demand rate case. The optimal adoption fee is decreasing as a function of a when the marginal benefit of a is larger than the marginal benefit of d.
Observation 7 (Mean Rejection and Mean Adoption Rate): The exact optimal adoption fee that maximizes the mean adoption (rejection) rate problem is decreasing as a function of a if the marginal effect of a on the expected return on investment (ROI) and the mean adoption (rejection) rate is larger than the marginal effect of d on ROI. Conversely, the fee is increasing as a function of a if the marginal effect of a on ROI is less than the marginal effect of d on ROI.
Proof: In Appendix B, we establish the sign of the derivative of the optimal fee with respect to a under each condition.
Observation 8 (Mean Rejection and Mean Adoption Rate): The exact optimal adoption fee that maximizes the mean adoption (rejection) rate problem is
increasing as a function of k if the marginal effect of a on the expected return on investment (ROI) is larger than the marginal effect of d on ROI.
Proof: We know from Cardoso (2009) that the blocking probability is decreasing in k, and the result follows by examining the sign of the derivative of the optimal fee with respect to k under each condition.
Observations 7 and 8 are similar to Observations 2 and 3, since the mean adoption rate impact measure and the mean demand rate impact measure are closely related: an improvement in the mean demand rate also improves the mean adoption rate and the mean rejection rate. The difference between the optimal adoption fees with approximation for the mean adoption rate problem and the mean demand rate problem arises because the mean demand rate problem represents the demand rate for any animal, while the mean adoption rate depends on the capacity of the shelter, k. The optimal resource allocation to advertisements with approximation that maximizes the mean adoption rate or the mean rejection rate problem shows similar characteristics to the mean demand rate problem: it is decreasing as a function of k when the marginal benefit of a exceeds the marginal benefit of d. If the organization invests one unit in advertisement or decreases the adoption fee by one unit, the mean demand rate will improve by one unit, whereas the mean adoption rate will improve by k units. Therefore, the optimal adoption fee with
approximation and the optimal resource allocation to advertisement with approximation are larger for the mean demand rate problem.
Observation 9 (Mean Adoption (Rejection) Rate): The capacity measures component of the optimal objective function value with approximation is larger than zero when the risk-free return rate exceeds a threshold. In this scenario, the optimal adoption fee with approximation can be set to a negative value.
Proof: The result follows by evaluating the capacity measures component at the optimal solution; the component is positive when the risk-free return rate exceeds the stated threshold.
In Observation 9, we demonstrate the risk-free return threshold that will allow the animal shelter to provide incentives for adopters and still receive sufficient funding. Figure 3-3 illustrates the behavior of the optimal adoption fee with approximation and the optimal resource allocation to advertisements with approximation when the marginal benefit of fundraising on ROI is dominant. We see that the optimal fee is decreasing as a function of k, and the optimal allocation to advertisements is increasing as a function of k. In order for the animal shelter to provide incentives for the adopters and still have sufficient monetary donations, the risk-free return rate should be in the shaded range or larger.
Observation 10 (Mean Adoption/Rejection Rate): The objective function for the mean adoption/rejection rate problem is increasing as a function of k when the per unit capacity operating cost is below a threshold, and decreasing as a function of k otherwise.
Proof: If we remove the integrality condition on k, we can write the derivative of the objective function with respect to k, which is nonnegative when the per unit capacity operating cost is smaller than a threshold; here we use the fact that the blocking probability is decreasing and convex in k (Cardoso 2009).
Observation 10 gives the threshold per unit capacity operating cost. The difference between the threshold per unit capacity cost for the mean demand rate problem and the mean adoption (rejection) rate problem is the result of the effect of k on the mean adoption (rejection) rate impact measure. A unit increase in k will increase the mean adoption rate, and consequently the objective function value, while the mean demand rate measure is not affected by k. This extra increase in the objective function from the mean adoption rate translates to a lower per unit operating capacity threshold.
3.5.4 Traffic Intensity
The optimal adoption fee for the traffic intensity/utilization problem behaves similarly to the previous solutions. The traffic intensity represents the average occupancy of an animal shelter. The optimal adoption fee with approximation for the traffic intensity problem is larger than that for the mean demand rate problem, and the difference is increasing with k.
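The structure of these resource allocation problems can be illustrated with a small numerical sketch. The functional forms below are illustrative assumptions, not the paper's exact model: demand is linear in the advertisement investment a and the adoption fee f, donations are linear in the fundraising investment d with a reputation bonus from a, and the weights w1–w3 are hypothetical. A coarse grid search over (a, d, f) subject to a + d ≤ m then traces how the optimal fee and allocations move.

```python
from math import factorial
from itertools import product

def blocking(k, rho):
    # Erlang loss probability for an M/G/k/k system with offered load rho
    terms = [rho**n / factorial(n) for n in range(k + 1)]
    return terms[-1] / sum(terms)

# Illustrative parameters; all names and values are hypothetical
k, m = 10, 50                # shelter capacity and resource budget (a + d <= m)
lam0, mu0 = 4.0, 0.5         # arrival rate and baseline mean demand rate
beta_a, beta_f = 0.05, 0.08  # demand sensitivity to ads and to the fee
r0, r_a = 1.5, 0.6           # risk-free and reputation-driven return rates
c_k = 2.0                    # operating cost per unit of capacity
w1, w2, w3 = 10.0, 1.0, 1.0  # weights: impact, ROI, net adoption fees

def objective(a, d, f):
    # mean demand rate rises with ads and falls with the fee (assumed linear)
    mu_dem = max(mu0 + beta_a * a - beta_f * f, 1e-6)
    adoption_rate = lam0 * (1 - blocking(k, lam0 / mu_dem))
    roi = r0 * d + r_a * a               # fundraising plus reputation channel
    net_fees = f * adoption_rate - c_k * k
    return w1 * mu_dem + w2 * roi + w3 * net_fees

best = max(
    ((a, d, f) for a, d, f in product(range(0, 51, 5), range(0, 51, 5),
                                      range(-10, 31, 2)) if a + d <= m),
    key=lambda x: objective(*x),
)
print("optimal (a, d, f):", best, " objective:", round(objective(*best), 2))
```

With a sufficiently large weight w1 on the impact measure relative to the net fee term, the search returns a negative optimal fee, i.e., an adopter incentive, echoing the negative-fee scenarios of Observations 4 and 9 and of Observation 11 below.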
Observation 11 (Traffic Intensity): The capacity measures component of the optimal objective function value is larger than zero when the risk-free return rate exceeds a threshold. In this scenario, the adoption fee can be set to a negative value.
Proof: The result follows by evaluating the capacity measures component at the optimal solution; the component is positive when the risk-free return rate exceeds the stated threshold.
In Figure 3-4, we see the behavior of the optimal solution with approximation to the traffic intensity problem for the dominant case. The optimal adoption fee with approximation is negative and increasing as a function of k, and the optimal resource allocation to advertisements with approximation is decreasing as a function of k. The risk-free return rate threshold is decreasing as a function of k. In this specific scenario, as the capacity becomes larger, the objective function increases. In Observation 12, we explore the scenarios where the objective function increases as a function of capacity.
Observation 12 (Traffic Intensity): The objective function for the traffic intensity problem is increasing as a function of k when the per unit capacity operating cost is below a threshold, and decreasing as a function of k otherwise.
Proof: If we remove the integrality condition on k, we can write:
the derivative of the objective function with respect to k, which is nonnegative when the per unit capacity operating cost is smaller than a threshold; here we use the fact that the blocking probability is decreasing and convex in k (Cardoso 2009).
The difference between the threshold per unit capacity cost for the mean demand rate problem and the traffic intensity problem is the result of the effect of k on the traffic intensity impact measure. A unit increase in k will decrease the traffic intensity beyond a threshold, and consequently the objective function value, while the mean demand rate measure is not affected by k. This extra decrease in the objective function translates to a larger per unit operating capacity threshold for traffic intensity; in the remaining cases the threshold comparison is reversed.
3.6 Resource Allocation for Traditional Shelters
The resource allocation problem for the traditional shelter case is similar to the adoption guarantee shelter case: we maximize the weighted sum of an impact measure, the expected return on investments, and the adoption fee revenue net of operating costs, subject to the resource constraint a + d ≤ m. In the traditional shelter case, the mean demand rate also includes the mean euthanization rate. The performance measures are obtained from Section 3.4. Due to the complexities of the impact measures, the results are implicit (Table 3-8).
We see from Table 3-8 that the optimal adoption fee formula is identical for all of the impact measures. However, this term also includes the optimal resource allocation to advertisements. The optimal resource allocation to advertisements and donations can be calculated using the corresponding expressions. The optimal adoption fee with heavy traffic approximation for traditional animal shelters is inversely influenced by the effect of the adoption fee on the mean demand rate. As the effect of the adoption fee on the demand rate increases, the organization can reduce its adoption fees. also decreases as a function of the third objective function weight, , and the capacity, k. As the animal shelter puts less importance on covering its expenses, the organization can reduce its fees and improve its impact measures. The effect of the adoption fee, and the ratio of the effect of fundraising on the donations to the marginal effect of advertisements on the mean demand rate, increase the adoption fee. On the contrary, the ratio of the effect of advertisements on the donations to the effect of advertisements on the mean demand rate decreases the adoption fee. Observation 13 (Traditional Shelter): The objective function for the mean euthanization and mean rejection rate problem, , is increasing as a function of k when , and decreasing as a function of k otherwise. Proof: If we remove the integrality condition of k, we can write:
if , where , since (Cardoso 2009). The difference between the per unit capacity cost threshold for the mean demand rate problem and the mean effective euthanization problem is: if , since . This difference is due to the reduced mean adoption rate from euthanizations and the effect of k on the mean effective euthanization rate. The per unit capacity cost threshold is larger for the mean demand rate problem if the effect of an increase in k on the mean demand rate is larger than the effect of an increase in k on the objective function of the mean effective euthanization problem. 3.7 Concluding Remarks In this paper we identify the optimal adoption fees, and the optimal resource allocation to advertisements and monetary donation increasing programs, that maximize the impact metrics, the expected return on investment and the donations left over after operating expenses are paid. We find that, in general, the optimal adoption fee is decreasing as a function of a if the marginal benefit of advertisements is larger than the marginal benefit of funding. In this scenario, the organization is receiving enough funding as monetary donations by improving its reputation that it can now decrease the adoption fees. The optimal adoption fee is increasing as a function of a if the marginal benefit of funding is high. An animal shelter will increase adoption fees as a response to cover expenses, and it will decrease fees as donations become
sufficient enough to support the expenses. The monetary gain from a is not sufficient to cover the expenses of a large shelter, as the effect of a on donations is indirect (through increasing reputation). Hence, the organization must increase adoption fees. In addition to the dynamics of the optimal adoption fee as a function of a and k, we also identify scenarios where the animal shelter may provide incentives for adopters. The optimal resource allocation to advertisement with approximation is decreasing as a function of k. Advertisements and capacity both increase the reputation of the organization and the mean demand rate; if the marginal benefit of a is large, a one unit increase in a will have a significant impact on the objective function, especially with a large capacity. Hence, the organization can spare more resources to invest in fundraising. We also observe that is decreasing as a function of the initial mean demand rate; when the initial mean demand rate is large enough, the effect of advertisements is insignificant. The dynamics of the optimal adoption fees are the same for each impact metric; however, the actual optimal adoption fee expressions are different for each. The optimal adoption fee for the mean demand rate problem differs from the optimal adoption fee for the mean waiting time problem due to the rates of change in the optimal values with an increase in a. The optimal adoption fee solutions for the mean demand rate problem and the mean adoption (rejection) rate problem differ because the mean demand rate measures the departure rate of a single animal from a shelter, while the mean adoption rate considers the capacity of the shelter as well. We see that . If the organization invests one unit in advertisement or decreases the adoption fee by one unit, the mean demand rate will improve by one unit, whereas the
mean adoption rate will improve by k units. Therefore, the optimal adoption fee with approximation and the optimal resource allocation to advertisement with approximation are larger for the mean demand rate problem. The difference between the threshold per unit capacity cost for the mean demand rate problem and the mean adoption (rejection) rate problem is , and it is the result of the effect of k on the mean adoption (rejection) rate impact measure. A unit increase in k will increase the mean adoption rate, and consequently the objective function value, while the mean demand rate measure is not affected by k. This extra increase in the objective function from the mean adoption rate translates to a lower per unit operating capacity threshold. The difference between the optimal adoption fees of the mean demand rate problem and the traffic intensity problem is . The traffic intensity represents the average occupancy of an animal shelter. The optimal adoption fee with approximation for the traffic intensity problem is larger, and the difference is increasing with k. The difference between the per unit capacity cost threshold for the mean demand rate problem and the traffic intensity problem is , and it is the result of the effect of k on the traffic intensity impact measure. A unit increase in k will decrease the traffic intensity if , and consequently the objective function value, while the mean demand rate measure is not affected by k. This extra decrease in the objective function translates to a larger per unit operating capacity threshold for traffic intensity. On the contrary, when , .
We also evaluated the objective functions at their optimal values with approximation to identify the best performing objective. We find that when the marginal benefit of a exceeds the marginal benefit of f and d together, the traffic intensity problem gives the highest optimal objective function value. Otherwise, the order of the optimal objective function values with approximation depends on the effect on the mean demand rate and the effect of all variables on ROI.
Table 3-1. Notation
Symbol | Meaning
 | Mean arrival rate of animals at a shelter
 | Mean adoption rate of animals at shelter type j
 | Capacity of shelter type j
 | Mean reneging (euthanization) rate of animals at traditional shelters
 | Probability that there are n animals in shelter type j
 | Mean waiting time of an animal in shelter type j
 | Mean rejection rate of an animal in shelter type j
 | Mean number of euthanizations in traditional shelters
 | The utilization of a shelter type j
 | Mean number of adoptions at shelter type j
 | Maximum allowed number for mean number of euthanizations

Figure 3-1. Rehoming process at adoption guarantee and traditional shelters (diagram: two shelter types, each with stacked primary care areas and queue size = 0)

Table 3-2. Performance measures for adoption guarantee shelters
Impact Measure | Formula
Mean Demand Rate |
Mean Waiting Time |
Mean Rejection Rate |
Traffic Intensity |
Mean Adoption Rate |
Table 3-3. Performance measures for traditional shelters
Impact Measure | Formula
Mean Demand Rate |
Mean Waiting Time |
Mean Rejection Rate |
Mean Effective Euthanization Rate |
Traffic Intensity |
Mean Adoption Rate |

Table 3-4. Comparison of impact metrics
Mean Demand Rate | Mean Waiting Time | Mean Rejection Rate | Traffic Intensity (1) | Mean Adoption Rate
(1) This inequality holds when the condition satisfying Proposition 3 holds.

Table 3-5. Summary of decision variables
a | The percentage of the resources allocated to programs increasing the mean demand rate
d | The percentage of the resources allocated to fundraising programs
f | Adoption fee per animal per time

Table 3-6. Summary of parameters
 | Fundraising reputation factor for the animal shelter
 | Mean fundraising reputation factor for all animal shelters
 | Sensitivity of the returns to the reputation factor
 | Risk-free return on investment for any shelter
 | The effect of programs increasing adoptions (a), adoption fees (f), and shelter size (k) on reputation
 | The effect of the percentage of resources allocated to programs increasing monetary donations on expected return on investments
v | Per unit cost of keeping up a shelter
 | The weight of the performance metric t
Table 3-7. Summary of adoption guarantee results
Impact Measure | Advertisements (a) Range
Mean Demand Rate | 0 to m
Mean Waiting Time | 0 to m
Mean Rejection/Adoption Rate | 0 to m
Traffic Intensity | 0 to m

Table 3-7. Continued
Impact Measure | Fundraising (d) | Adoption Fee Rate (f)
Mean Demand Rate | m | 0
Mean Waiting Time | m | 0
Mean Rejection/Adoption Rate | m | 0
Traffic Intensity | m | 0 N/A

Table 3-8. Traditional animal shelter results
Impact Measure | f
Mean Demand Rate |
Mean Waiting Time |
Mean Rejection Plus Euthanization Rate |
Traffic Intensity |
Mean Adoption Rate |
Figure 3-2. The mean demand rate problem solution

Figure 3-3. The mean adoption rate problem solution ( is dominant)
Figure 3-4. The mean adoption rate problem solution ( is dominant)

Figure 3-5. The optimal-with-approximation objective function comparison
CHAPTER 4
RESOURCE ALLOCATION OF ANIMAL SHELTERS WITH CAPACITY EXPANSION

4.1 Motivation

The biggest difference between non-profit organizations and for-profit organizations is ownership. For-profit organizations can distribute their wealth among the shareholders, whereas non-profit organizations can only use the surplus from their activities to provide services, self-preserve and grow. In the case of non-profit organizations that provide education, healthcare or care, growth usually requires investing in more equipment, workforce or both. Animal shelters can provide care for more animals by adding primary care areas and caregivers. Capacity expansion in an animal shelter is also a viable solution for overcrowded shelters in many situations. In order to increase capacity, some of the limited resources must be allocated to adding more capacity. In Chapter 4, we incorporate capacity expansion decisions into the resource allocation problem. In this scenario, the resources of the organization must be divided between advertisements, fundraising activities and capacity expansion. The model we introduce is a non-linear mixed-integer problem and is known to be NP-complete. We perform numerical experiments to understand the dynamics between changes in alternate resource allocation plans. We collect data from several resources to estimate the parameters, allowing us to perform numerical experiments with realistic data. We explain the estimation of the parameters in detail in Section 4.2. The key research questions that we address in Chapter 4 are: 1. What are realistic data for the animal shelter setting? 2. Given the realistic data, what are appropriate resource allocation schemes for alternate shelters?
3. When should an animal shelter consider adding more capacity? 4. Under what circumstances should the animal shelter undertake large fundraising efforts? 5. When is advertising the most appropriate mechanism to aid in the adoption process? 6. Should adoption guarantee shelters utilize a different resource allocation strategy than the traditional shelters? 4.2 Realistic Data We utilize realistic data obtained from Asilomar Accords, Form 990s, shelter websites and Charity Navigator to estimate the parameters of our model. Asilomar Accords include information about yearly intake, adoptions and euthanizations, as well as the number of healthy and unhealthy animals and transfers to and from other organizations. The information available provides a way to track shelter statistics and live release rates. Specifically, we use the beginning and ending shelter counts, total intake, adoptions and total euthanasia. The beginning count is the number of animals that are in the shelter or in foster care at the beginning of a reporting period. Similarly, the ending shelter count is the number of animals at the shelter or in foster care at the end of a reporting period. Intake is the number of live animals received at the shelter from the public, incoming transfers from within and outside the coalition, and from guardians requesting euthanasia. Adoptions are the number of animals that the organization rehomed with the public. The total euthanasia field shows the number of animals that are euthanized regardless of their health or behavioral status. In addition to the information from Asilomar Accords, we collect the financial information from Form 990s. Form 990s are annual reports that non-profit organizations must file with the IRS that include information about the mission of the organization, its finances and programs. We specifically collected the contributions, gifts and grants,
program service revenue, advertising and promotion expenses, fundraising income and expenses, and operating expenses. Contributions, gifts and grants are the donations the organization receives. This number is separate from the fundraising income. Program service revenue is only applicable to organizations that charge a fee for their services (i.e., hospitals, schools, animal shelters). Advertising and promotion expenses include all printed and electronic media advertisements, internet site link costs and independent contractor fees for advertisements. We also collected the scores and rankings from Charity Navigator to represent reputation in our model. The adoption fees were collected from individual shelter websites, and the maximum adoption fee was used in our calculations. In Table 4-1, we see the base case and the ranges of the parameters we identified. To estimate the arrival, adoption and euthanization rates, and the capacity of the shelters, we use Asilomar Accords records for 2011. We collected the information from 159 and 149 animal shelters for dogs and cats, respectively. The arrival, departure and euthanization rates are taken from the Total Intake, Adoptions and Total Euthanasia fields on the Annual Animal Statistics reports. The capacity represents the maximum number of animals that can be kept at the shelter at any time, and it is calculated as the maximum of the beginning and ending shelter counts in 2011. The reputation of any organization is intangible. Charity Navigator, a non-profit organization that provides aid to donors, has a scoring and rating system that allows donors to compare different charities. Charity Navigator uses three dimensions to rate non-profit organizations: (i) financial measure, (ii) accountability and transparency and (iii) results reporting. We use these ratings/scores to estimate the reputation of the
101 organization. Among the 1.6 million charities, Charity Navigator rates appro ximately 7500 charities, and 24 charities that are listed in Table D 1 represent the animal shelters that participate in Asilomar Accords program and also rated by Charity Navigator. We use the Charity Navigator score of an animal shelter to represent the reputation of a firm, and the average reputation is calculated by taking the mean of the scores of the all the animal shelters in our data. We collected financial information from Form 990s of the 24 aforementioned charities. To calculate , t he effect of programs increasing monetary donations on expected return on invest fundraising expenses. The budget parameter, m, is directly taken from the contributions , gifts and grants field. While mo st of the estimate s were readily available , the estimates for the effect of advertisements, adoption fees , and capacity on the expected donations and the mean demand rate warrant further analysis. We performed two different multiple regression analyses to estimate these parameters. In the first regression analysis, we tried to find the effect of advertisements, adoption fees, capacity and reputation difference on the expected donations ( ) . is the reputation difference and it is the . The R square and the adjusted R square values for this analysis were 0.72 and .63. (Refer to Appen dix D for complete analysis.) In the second regression analysis, we estimate the effect of advertisements and adoption fees on the mean adoption rate (G,L) . To complete this analysis, we utilize the adoption rate and financial information for 2010 and 201 1. We also collected the
In the second regression analysis, we estimate the effect of advertisements and adoption fees on the mean adoption rate (G, L). To complete this analysis, we utilize the adoption rate and financial information for 2010 and 2011. We also collected the adoption fee information from the shelter websites and used the maximum adoption fee reported. The number of animal shelters that have both Asilomar Accords and Form 990 records for both 2010 and 2011 was low; hence, the R squared and adjusted R squared values were low. We still use the coefficients as a starting point for G and L in our numerical analysis. 4.3 Model Description 4.3.1 Model for Adoption Guarantee Shelters The resource allocation problem including the capacity as a variable for adoption guarantee shelters is: s.t. The objective function is similar to the resource allocation problem described in Section 3.5. All k parameters are replaced with , where is the capacity expansion size variable and is the initial capacity parameter. The first constraint is the resource constraint, indicating that the organization can invest its resources in advertisements, , fundraising activities, , and capacity expansion, , at a per unit cost of . The second constraint limits the capacity expansion at an animal shelter due to space limitations. The third constraint limits the adoption fees. The organization can set negative adoption fees, providing incentives for adoptees up to .
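Because the impact and donation expressions above are problem-specific, a full implementation cannot be given here; the skeleton below only illustrates how the numerical experiments of this chapter can be organized as a grid search over the decision variables. The `impact` and `donations` functions are explicit placeholders (our inventions), while the budget, expansion and fee bounds, and the base-case constants follow Table 4-1 and the text.

```python
import itertools

m, c = 3_876_470.0, 1000.0      # budget and per unit capacity expansion cost
K0, K_BAR = 114, 100            # initial capacity and maximum expansion (base case)
F_LO, F_HI = -200.0, 600.0      # adoption fee bounds used in the experiments
v, departures = 17854.98, 1624.0
w1, w2, w3 = 1.0, 1.0, 1.0      # objective weights

def impact(a, f, k):            # placeholder impact measure (hypothetical)
    return 1e-5 * a - 1e-2 * f + 1e-3 * k

def donations(a, d, f, k):      # placeholder donation function (hypothetical)
    return 1.82 * d + 9.76e-5 * a + 0.072 * k - 0.038 * f

best = None
for k_e in range(0, K_BAR + 1, 10):
    left = m - c * k_e                    # budget remaining after expansion
    if left < 0:
        continue
    k = K0 + k_e
    for f, a in itertools.product((F_LO, 0.0, F_HI), (0.0, 0.5 * left, left)):
        d = left - a                      # spend the full budget: a + d + c*k_e = m
        z = w1 * impact(a, f, k) + w2 * donations(a, d, f, k) \
            + w3 * (f * departures - v * k)
        if best is None or z > best[0]:
            best = (z, a, d, k_e, f)
print("best (Z, a, d, k_e, f):", best)
```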
The first term of the objective function represents the impact measures that were defined in Section 3.5. The mean demand rate, , and the mean waiting time, , are not affected by the capacity measure. The mean rejection and mean adoption rates, , , are increasing concave with respect to , as is known to be convex with (Cardoso 2009). Traffic intensity, , is also decreasing and convex. The second term of the objective function represents the donations the organization receives. The only term involving in this component is , and it is linear. The third term of the objective function is the revenue from adoption fees less the operating cost. We know that is concave in , and v is linear with respect to . 4.3.1.1 Numerical Experiments for Adoption Guarantee Shelters In the base case scenario, the optimal solution is to invest all the resources in advertisements and set the adoption fee to $600 for all impact measure problems. During our experiments we observed similar trends for the different impact measures. In general, we see that changing the risk-free donation rate, , the weight of the impact measure, , the per unit cost of capacity expansion, c, the initial reputation or the average reputation, or the initial mean demand rate, , has no effect on the optimal solution (i.e., the resource allocation plan), but they directly affect the objective function value. The risk-free donation rate is the donations any shelter will receive regardless of its reputation. This result is consistent with the results in Section 3.5. The risk-free donation rate only increases the total funding available to the organization, but it has no effect on how the resources are allocated. The costs associated with capacity are the
per unit expansion cost and the per unit operating cost. For the base case scenario, the per unit operating cost associated with maintaining the shelters is much larger than the per unit expansion cost, making changes in the per unit expansion cost insignificant to the optimal solutions, as the objective of the organization is to improve the mean demand rate as much as possible with the available resources. 4.3.1.2 The Mean Arrival Rate We varied the values of the mean arrival rate between 1000 and 3000. The results show that the changes in the mean arrival rate do not change the optimal resource allocation structure. However, we see a threshold where the optimal adoption fee shifts from the lower bound (-$200) to the upper bound ($600). When the arrival rate is high, so is the adoption rate for adoption guarantee shelters. In the low arrival rate case, the donations are sufficient to cover the expenses and provide incentives for adoptees, whereas for the high arrival rate, donations are not enough. 4.3.1.3 The Effect of Adoption Fees, Capacity, and Advertisements on the Reputation ( , ) We range the values of between 0 and 12, and see that when the marginal benefit of advertisements is lower than the marginal benefit of fundraising, the animal shelter allocates all resources to fundraising activities (Figure 4-1). Once investing in advertisements becomes more valuable, i.e., has a higher marginal benefit, the organization allocates all resources to advertisements. The change in has no effect on the capacity or the adoption fees, as an increase in donations has no effect on the impact measures, only on the trade-off between the resource allocation to advertising or fundraising. This may have future
implications for the organization, as these two alternate activities may require different sets of organizational competencies to implement effectively. For the range of values we utilized in the numerical experiment, the effect of adoption fees on the reputation, , had no influence on the optimal solutions. This can be explained by the difference in magnitudes of the possible values of a and f. We changed the values of between 0 and 10, and we observed a threshold where the animal shelter expands the capacity of the shelter to the upper bound. As a result of the amount spent on capacity expansion, the resource allocation to advertisements decreases. We also observe that the adoption fee is not affected by . Therefore, it appears that when the influence of capacity on reputation increases, capacity expansion becomes a more viable option. In this situation, not only are the operational benefits of increasing capacity important, but the reputational effects are as well. 4.3.1.4 The Effect of Reputation and Fundraising on Donations ( ) In Figure 4-2, we see how the optimal solution and the optimal objective function values change with the effect of fundraising on donations for the mean demand rate impact measure ( ). We observe a similar trend for all impact measures, with the exact solutions slightly different from each other (please refer to Appendix D for complete numerical results). As the effect of fundraising on donations increases, the optimal resource allocation to advertisements decreases and the optimal allocation to fundraising increases. Specifically, the organization invests all resources either in advertisements or in fundraising activities. For the ranges we have studied, the optimal adoption fee is first decreasing until a threshold and increasing thereafter for the mean
waiting time and mean rejection rate impact measures. The threshold is the point where the marginal benefit of advertisements and the marginal benefit of fundraising equal each other. For the mean demand rate and the traffic intensity, the adoption fee slowly decreases with . As the effect of fundraising on donations, , increases, the animal shelter gains more from a single unit of resource investment in fundraising, allowing the organization to obtain most of its funds from donations and reduce its adoption fees. We ranged the values for between 0 and 60,000. An increase in translates to an increase in the effects of advertisements, adoption fee, capacity and the relative reputation on the donations. As we increase the effect of reputation on the donations, the animal shelter first allocates all resources to fundraising activities up to a threshold. This threshold is determined by the point where the effects of reputation and fundraising activities on the donations equal each other. Beyond this threshold, the reputation is dominant over fundraising activities for raising donations, thus the animal shelter will invest all resources in advertisements. 4.3.1.5 Objective Function Weights ( ) The weight of the donations, , has no effect on the optimal resource allocation. The organization allocates all resources to advertisements due to the higher marginal benefit. In contrast, the optimal adoption fee is set to the lower bound when is zero, and to the upper bound when is larger than zero. This can be explained by the fact that when the organization is not receiving any funding through donations, it has to cover its expenses solely through adoption fees. The effect of the weight of donations, , for the mean demand rate problem is shown in Figure 4-3. When the weight of the expected donations, , is zero, the
optimal solution is to expand the capacity by 100 units at a per unit cost of $1000 and allocate the rest of the resources to advertisements. The objective of the animal shelter in this case is to improve the impact measure and donations. As a result, the organization lowers the adoption fees to increase the mean demand rate and increases the capacity to improve its reputation. 4.3.1.6 The Effect of Advertisements or Fundraising on the Mean Demand Rate (G, L) In Figure 4-4, we illustrate the changes in the optimal adoption fees for each impact measure with respect to increases in the effect of advertisements on the mean demand rate. For the mean waiting time and the mean adoption/rejection rate problems, the optimal adoption fee is $600 for any value of G, while the optimal adoption fee is decreasing for the mean demand rate problem and increasing for the traffic intensity problem. In the mean demand rate problem, when G is zero, the animal shelter needs to increase adoption fees to cover expenses, as the mean adoption rate is not as high. In the traffic intensity problem, when G is zero, the impact measure can only be improved by the adoption fee; hence the animal shelter reduces the adoption fees. The difference between the mean demand rate and the traffic intensity problem is caused by the fact that an increase in advertisements or a decrease in adoption fees influences the mean demand rate impact measure by a single factor and the traffic intensity impact measure by factors. We observe similar results for the optimal solutions with changes to the effect of the adoption fee on the mean demand rate.
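The bound-flipping behavior in Figure 4-4 can be stylized in a few lines. The sketch below assumes a linearized mean demand rate lam0 + G*a - L*f (with G and L from Table 4-1) and a simple operating term; this is our toy objective, not the paper's, and it only shows why a constant marginal effect of f pushes the optimal fee to one of its bounds.

```python
G, L = 0.015670769, 4.242619088        # Table 4-1 regression estimates
lam0, departures, v, k = 3380.0, 1624.0, 17854.98, 114
a = 3_876_470.0                        # all resources to advertisements (base case)

def z(f, w1=1.0, w3=1.0):
    demand = lam0 + G * a - L * f      # linearized mean demand rate (toy)
    return w1 * demand + w3 * (f * departures - v * k)

for f in (-200.0, 600.0):
    print(f"f = {f:6.1f}  Z = {z(f):,.2f}")
# dZ/df = -w1*L + w3*departures is constant, so the optimum sits at a fee
# bound: the upper bound when the revenue effect dominates, the lower bound
# (an adopter incentive) otherwise.
```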
4.3.1.7 Per Unit Cost of Operating a Primary Care Area (v) In general, we observe that the animal shelter cannot expand the capacity due to the high costs of operating and construction. In the base case scenario, where the per unit cost of expansion is $1000, the changes in per unit operating costs have no effect on the capacity expansion decision. To fully understand the effect of per unit operating costs, we reduce the per unit capacity expansion cost to $100. The results show that when the per unit cost of capacity expansion is low, the capacity expansion size decreases with per unit operating costs up to a threshold. Beyond this threshold, the costs associated with expansion are too high and the organization invests all resources in advertisements. 4.3.1.8 Summary of the Results for Adoption Guarantee Shelters In general, when capacity is introduced as a variable in the resource allocation problem for the adoption guarantee animal shelters, due to the high cost of construction and operating, the organization is better off allocating all of its resources to either advertisements or fundraising activities. If the marginal benefit of advertisements is larger than the marginal benefit of fundraising activities, then all of the resources will be allocated to advertisements, and vice versa. The organization will invest in capacity expansion only under some circumstances, when: (i) the effect of capacity on donations is larger than the effect of fundraising activities on donations, (ii) the weight of the third term in the objective function is zero, and (iii) both the per unit cost of operating and the per unit cost of capacity expansion are low. In the first scenario, the animal shelter receives more donations by expanding its capacity, and consequently increasing visibility and reputation, than through fundraising activities. This applies to organizations with low fundraising efficiency. In the second case, when the weight of the second term in the objective function is zero, the organization receives no benefit from donations and must cover its expenses through
adoption fees. Thus, the animal shelter will set the adoption fees to the upper bound and expand capacity. An increase in the capacity is accompanied by an increase in the mean admittance rate, and consequently the mean adoption rate. In the case where the weight of the third objective is zero, the animal shelter will expand capacity, as the cost of operating is not a factor in the decision and the marginal benefit of capacity is high. For the third case, the organization will expand capacity as much as possible up to a threshold. Above this threshold, the costs associated with a single primary care area are too high. In general, we see that the risk-free donations, the weight of the impact measures, the per unit cost of capacity expansion, and the initial reputation or the average reputation of all shelters do not influence the optimal solutions. In the base case, the per unit cost of capacity expansion is insignificant compared with the per unit operating cost of capacity. In order for the capacity expansion cost to be a significant factor, the per unit operating cost should be low. For the other parameters, the organization will try to improve its performance regardless of the current values. The effects of advertisements, adoption fees and capacity on reputation influence the structure of the optimal solution differently. There is an explicit trade-off between advertisements, fundraising and capacity expansion. When the marginal benefit of any of these is the highest, the organization will invest in that specific activity. In the case where the dominant effect is that of capacity on donations, the organization will invest in capacity up to the limit and allocate the rest of the resources to advertisements. The
magnitude of the adoption fees is insignificant compared to the advertisements and donations; as a result, the effect of adoption fees on reputation is also insignificant. The effect of fundraising on donations follows a similar trend: when fundraising is dominant, the animal shelter should invest all resources in fundraising, and vice versa. The weight of the donations only affects the monetary component of the objective function, therefore the base case resource allocation solution is still optimal. However, we find that the optimal adoption fee is at the upper bound when the weight of the donations term is zero, as the organization must cover its expenses through adoptions. If the organization does not put any importance on operating revenue, then it should expand the capacity to the limit and allocate the rest of the resources to advertisements. The results of the numerical experiments for each impact measure show similar trends for most of the cases. When we focus on the effect that the advertisements or adoption fees have on the mean demand rate, the results are different for each impact measure. We find that, when the effect of advertisements on the mean demand rate is zero, the animal shelter reduces adoption fees or increases adoption fees for the traffic intensity and mean demand rate problems, respectively. These differences stem from the fact that for the traffic intensity problem, any increase in the mean demand rate has an effect multiplied by the capacity of the shelter, whereas an increase in the mean demand rate for the mean demand rate problem increases the impact measure linearly. 4.3.2 Model for Traditional Shelters In Section 4.3.2 we perform numerical experiments using real data to solve the resource allocation problem including capacity as a decision variable. The resource
allocation problem including the capacity as a variable for the traditional shelter becomes: s.t. Most of the results for the resource allocation problem are similar to the adoption guarantee case. Thus, we only discuss the cases specific to the traditional shelter case. 4.3.2.1 The Mean Euthanization Rate Similar to the mean demand rate of the adoption guarantee shelter, a change in the mean euthanization rate has no effect on the optimal solution. The organization's goal is to improve its performance regardless of the current departure rate. 4.3.2.2 The Mean Euthanization Rate Plus Mean Rejection Rate Problem In the adoption guarantee shelter problem, any animal that is accepted to the shelter is adopted. In the traditional shelter problem, the animals that are not adopted include euthanizations and rejections. In order to capture the animal population that is not rehomed, we consider these together. The results for this impact measure followed a similar trend as the adoption guarantee shelter, except for the weight of the donations term. In Figure 4-5, we see the effect of changes in the weight of the second term on the optimal solutions for the traditional shelter case. As increases, the adoption fees decrease and capacity increases. When donations are not dominant (i.e., low weight),
the adoption fee is at the upper bound. In this case, the animal shelter is improving impact measures by allocating all resources to advertisements. When donations become dominant, the negative effect of a positive adoption fee becomes significant. As a result, the animal shelter reduces adoption fees to the lower bound. We find that the animal shelter increases capacity by 100 primary care areas beyond a certain threshold. Similar to the adoption fee case, as the donations become dominant, all of the factors that improve them also increase. The difference in the threshold values for the switches in adoption fees and capacity expansion is due to the difference between the and coefficients. Also, the animal shelter allocates all of its resources to advertisements up to a threshold. Beyond this threshold, the animal shelter allocates some resources to capacity expansion and the remainder to advertisements. 4.4 Concluding Remarks As a final part of our analysis, we reformulate the resource allocation problem for both types of shelters to include capacity as a decision variable. To gain further insights into this mixed-integer problem, we perform numerous numerical experiments utilizing public data. In general, the results show that when the marginal benefit of advertisements is high, it is better to allocate all resources to advertisements and nothing to fundraising activities or capacity expansion. When the marginal benefit of fundraising is high, the animal shelter should allocate all resources to fundraising activities. Utilizing the realistic data, there are several factors that have no effect on the capacity expansion or resource allocation decision of the animal shelter: risk-free
donations, the importance of the impact measure to the organization, the per unit capacity expansion cost, and the initial reputation or the average reputation of all animal shelters. For all of the parameters except the per unit capacity expansion cost and the importance of the impact measure, this result is expected. The organization will try to improve its performance regardless of these initial factors. When we consider the per unit capacity expansion cost, the true capacity expansion cost includes the per unit operating cost as well. In general, the per unit operating cost is much larger compared with the per unit capacity expansion cost. An animal shelter needs to consider both the per unit capacity expansion cost and the per unit operating costs before making the decision to add more primary care areas, and these costs appear to be prohibitive. In the case of the importance of the impact measure, the impact measure is not enough to change the resource allocation decision of an organization. The main goal of a non-profit is to provide continuous service; hence, in certain cases, covering expenses becomes more important than impact measures. We also note that the arrival rate of animals to a shelter does not affect the resource allocation structure of the organization for investments in advertisements, fundraising or capacity expansion. The arrival rate, however, affects the optimal adoption fees, because when the arrival rate is less than the departure rate, the organization can provide incentives for adoptees rather than charging a fee. In this scenario, the organization can cover its expenses from donations; when the arrival rate is high, donations are not sufficient to cover the expenses. The effects of advertisements, adoption fees and capacity influence the optimal solutions differently. In general, we see a trade-off between advertisements, fundraising
and capacity. If the marginal benefit of any of these is dominant, the organization should invest all of its resources in this specific activity. The effect of adoption fees on the reputation is not large enough compared with advertisements, fundraising or capacity to influence the optimal resource allocation strategy. The effect of fundraising on donations follows a similar trend: when fundraising is dominant, the animal shelter should invest all resources in fundraising, and vice versa. The importance an animal shelter puts on a specific performance measure may differ depending on its particular mission. If an animal shelter believes that the donations are not as important to the organization as providing service, it will invest all its resources in advertisements and set the adoption fees to the maximum possible to cover expenses. Another animal shelter may put importance only on the impact measures and donations. Since this type of organization relies on donations, it will increase capacity, invest in advertisements and reduce adoption fees to improve reputation and receive the maximum amount of donations possible. The influence of advertisements or adoption fees on the first term is different for each impact measure. We find that, when the effect of advertisements on the mean demand rate is zero, the animal shelter reduces adoption fees for the traffic intensity problem and increases adoption fees for the mean demand rate problem. These differences stem from the fact that for the traffic intensity problem, any increase in the mean demand rate has an effect multiplied by the capacity of the shelter, whereas an increase in the mean demand rate for the mean demand rate problem increases the
impact measure linearly. Therefore, if a shelter is more concerned about utilization or efficiency measures (i.e., traffic intensity), then it should reduce the adoption fees. The animal shelter will invest in capacity expansion only when the per unit operating and per unit capacity expansion costs are low or zero, or the effect of a large shelter on reputation is extremely high. The results for the traditional shelter problem show similar trends except for one special case. If receiving donations is a dominant objective for the animal shelter, then it is optimal to allocate resources to capacity expansion and to advertisements. The organization can also provide incentives for adopters to improve impact measures.
Table 4-1. Summary of data
Parameter | Meaning | Source | Base Case
 | Initial reputation of the organization | Score from Charity Navigator (CN) | 56.60
 | Average reputation of all shelters | Mean score for all available shelters (CN) | 56.60
 | The effect of reputation on donations | Estimated using regression (990) | 54288.98
 | Risk-free donations | Estimated using regression (990) | 0
 | The effect of advertisement on reputation | Estimated using regression (990) | 9.76E-05
 | The effect of adoption fee on reputation | Estimated using regression (990) | 0.038
 | The effect of capacity on reputation | Estimated using regression (990) | 0.072
 | The effect of fundraising on reputation | Fundraising revenue / fundraising expenses (line 8c / line 8b, 990) | 1.82
v | Per unit cost of operating (maintaining) a primary care area | Values from Part IX, Line 24 / capacity (990, AA) | 17854.98
m | Budget | Contributions (990) | 3876470
G | The effect of advertisements on mean demand rate | Estimated using regression (AA) | 0.015670769
L | The effect of adoption fee on the mean demand rate | Estimated using regression (AA) | 4.242619088
 | The weight of the performance metric t | | 1
 | Arrival rate of animals | Intake (AA) | 3380
 | Departure rate of animals including euthanizations | Live Release (AA) | 1624
 | Current capacity of the shelter | Max(Beginning Count, Ending Count) | 114
 | Euthanization rate | Euthanizations (AA) | 651.3851
p | The euthanization probability | Euthanizations / Intake (AA) | 0.12
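For convenience in the experiments that follow, the Table 4-1 base case can be collected into a single mapping. The sketch below is ours; the key names are invented labels, and the values are the table's base-case figures with signs as printed.

```python
# Invented key names; values are Table 4-1 base-case figures (signs as printed).
BASE_CASE = {
    "reputation_initial": 56.60,        # Charity Navigator score
    "reputation_mean_all": 56.60,
    "reputation_donation_effect": 54288.98,
    "risk_free_donations": 0.0,
    "advert_reputation_effect": 9.76e-05,
    "fee_reputation_effect": 0.038,
    "capacity_reputation_effect": 0.072,
    "fundraising_effect": 1.82,         # fundraising revenue / expenses
    "v_operating_cost_per_area": 17854.98,
    "m_budget": 3876470.0,
    "G_advert_demand_effect": 0.015670769,
    "L_fee_demand_effect": 4.242619088,
    "impact_weight": 1.0,
    "arrival_rate": 3380.0,             # intake (AA)
    "departure_rate": 1624.0,           # live release (AA)
    "capacity_k": 114,
    "euthanization_rate": 651.3851,
    "p_euthanization": 0.12,
}
```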
Figure 4-1. The effect of advertisements on reputation vs optimal results (x-axis: the effect of advertisement on reputation, 0 to 12; series: Advertisement, Fundraising, Adoption Fee)

Figure 4-2. The effect of fundraising on donations (x-axis: fundraising donation coefficient, 2 to 14; series: Advertisement, Fundraising, Adoption Fee)
Figure 4-3. The effect of the weight of donations (x-axis: W3, the weight of operating revenue, 0 to 3; series: Fundraising, Adoption Fee, Capacity)

Figure 4-4. Changes in G and the optimal adoption fees (x-axis: G, the effect of advertisement on mean demand rate; y-axis: adoption fee; series: Mean Demand Rate, Mean Waiting Time, Mean Adoption/Rejection Rate, Traffic Intensity)
Figure 4-5. The weight of donations vs optimal solutions (x-axis: W2, the weight of donations, 0 to 7; series: Adoption Fee, Capacity)
CHAPTER 5
CONCLUSIONS

In Chapter 5, we summarize the concluding remarks of our analyses from Chapters 2-4. In Chapter 2, we analyze the impact that production and emissions regulations have on location decisions. Where the environmental regulations are not taken into consideration, there is a trade-off between transportation costs and fixed costs. For the third problem, EUFLP3, we capture the impact of emissions taxes, regional production-level environmental limits and global transportation emissions regulations. Essentially, we find that when the global limit on transportation emissions is relatively low, a more dispersed production network is optimal. A low regional production environmental limit should force the company into compliance. However, numerical results illustrate that, with regard to the regional production environmental penalty, an increase in the lump-sum dollar amount associated with the penalty is much more effective than a decrease in the actual limit of damage tolerated. When non-compliance becomes costly with a large fixed penalty, both the regional production environmental damage and the global transportation emissions are reduced. In order to reduce the regional production environmental damage and transportation emissions, policy makers should choose intermediate limits but high penalties. Furthermore, companies with low and medium pollution should consider dispersing their networks to avoid penalties and reduce their costs. Companies with
high pollution should resort to other resources for compliance or take the risk of being penalized. In Chapter 3, we identify the optimal adoption fees, and the optimal resource allocation to advertisements and monetary donation increasing programs, that maximize the impact metrics, the expected return on investment and the donations left over after operating expenses are paid. We find that, in general, the optimal adoption fee is decreasing as a function of a if the marginal benefit of advertisements is larger than the marginal benefit of funding. In this scenario, the organization is receiving enough funding as monetary donations by improving its reputation that it can now decrease the adoption fees. The optimal adoption fee is increasing as a function of a if the marginal benefit of funding is high. An animal shelter will increase adoption fees as a response to cover expenses, and it will decrease fees as donations become sufficient enough to support the expenses. The monetary gain from a is not sufficient to cover the expenses of a large shelter, as the effect of a on donations is indirect (through increasing reputation). Hence, the organization must increase adoption fees. In addition to the dynamics of the optimal adoption fee as a function of a and k, we also identify scenarios where the animal shelter may provide incentives for adopters. The optimal resource allocation to advertisement with approximation is decreasing as a function of k. Advertisements and capacity both increase the reputation of the organization and the mean demand rate; if the marginal benefit of a is large, a one unit increase in a will have a significant impact on the objective function, especially with a large capacity. Hence, the organization can spare more resources to invest in fundraising. We also observe that is decreasing as a function of the initial mean
demand rate; when the initial mean demand rate is large enough, the effect of advertisements is insignificant. In Chapter 4, we reformulate the resource allocation problem for both types of shelters to include capacity as a decision variable. To gain further insights into this mixed-integer problem, we perform numerous numerical experiments utilizing public data. In general, the results show that when the marginal benefit of advertisements is high, it is better to allocate all resources to advertisements and nothing to fundraising activities or capacity expansion. When the marginal benefit of fundraising is high, the animal shelter should allocate all resources to fundraising activities. The effects of advertisements, adoption fees and capacity influence the optimal solutions differently. In general, we see a trade-off between advertisements, fundraising and capacity. If the marginal benefit of any of these is dominant, the organization should invest all of its resources in this specific activity. The animal shelter will invest in capacity expansion only when the per unit operating and per unit capacity expansion costs are low or zero, or the effect of a large shelter on reputation is extremely high. The results for the traditional shelter problem show similar trends except for one special case. If receiving donations is a dominant objective for the animal shelter, then it is optimal to allocate resources to capacity expansion and to advertisements. The organization can also provide incentives for adopters to improve impact measures.
APPENDIX A
EUFLP DATA

Figure A-1. Plant size versus emissions. Total release (pounds, x100000) versus production/sales (units, x100000), with linear and exponential fits: y = 0.4323x - 63660 (R² = 0.7633) and y = 169650e^(6E-07x) (R² = 0.5722).

Table A-1. Summary of parameter estimates
Columns: exchange rate; unit sales (mils units); North America sales (mils units); capital expenditure CAPEX (mils $); expenses/output ($/unit); (Net PPE - CAPEX)/output ($/unit); total release/output; revenue/output ($/unit).

2010 (11.8)
General Motors: N/A; 2.21; N/A; 4012.00; 58634.10; 6887.38; 0.25; 61334.24
Ford Motor: N/A; 5.52; N/A; 4092.25; 55312.53; 3455.24; 0.51; 23346.13
Toyota Motor: (87.78 ¥/$); 7.24; 2.10; 7169.36; 29598.37; 8940.58; 0.48; 30790.89
Honda Motor: (87.78 ¥/$); 3.51; 1.46; 3547.05; 27140.91; 5280.84; 0.55; 28989.13
Hyundai Motor: 5.79; 1.07; 3545.13; 15693.97; 4380.08; 0.08; 16822.83

2009 (10.60)
General Motors: N/A; 2.04; N/A; 5431.00; 61685.79; 15117.96; 0.22; 51361.77
Ford Motor: N/A; 4.87; N/A; 4059.00; 24602.34; 3817.92; 0.51; 23897.04
Toyota Motor: (93.57 ¥/$); 7.57; 2.21; 6460.79; 29644.47; 8623.87; 0.35; 30065.75
Honda Motor: (93.57 ¥/$); 3.39; 1.30; 3523.94; 25884.29; 5535.56; 0.52; 27030.43
Hyundai Motor: 4.95; 0.89; 3303.68; 15220.74; 6172.25; 0.06; 14464.60

2008 (13.50)
General Motors: N/A; 2.98; N/A; 7530.00; 57097.95; 10776.92; 0.22; 49976.18
Ford Motor: N/A; 5.53; N/A; 6492.00; 28795.37; 3990.06; 0.49; 25955.17
Toyota Motor: (103.36 ¥/$); 8.91; 2.96; 14324.40; 26069.36; 8478.91; 0.30; 26241.32
Honda Motor: (103.36 ¥/$); 3.52; 1.50; 5796.55; 27018.25; 4260.02; 0.62; 27539.94
Hyundai Motor: 4.21; 0.80; 3949.80; 14464.76; 4950.33; 0.12; 17167.26

2007 (16.50)
General Motors: N/A; 3.87; N/A; 7542.00; 47657.87; 9173.78; 0.27; 46543.57
Ford Motor: N/A; 6.56; N/A; 6022.00; 27116.86; 4609.76; 0.61; 23551.33
Toyota Motor: (117 ¥/$); 8.52; 2.94; 1429.55; 21766.32; 6354.84; 0.40; 24010.88
Honda Motor: (117 ¥/$); 3.93; 1.85; 5590.00; 24061.68; 4793.51; 0.52; 26137.15
Hyundai Motor: 2.33; 0.47; 3483.97; 12594.82; 9403.05; 0.21; 32116.57

2006 (17.10)
General Motors: N/A; 4.13; N/A; 7902.00; 50979.39; 4361.21; 0.28; 49567.76
Ford Motor: N/A; 6.70; N/A; 6848.00; 26435.42; 9939.60; 0.69; 23901.00
Toyota Motor: (116.30 ¥/$); 7.97; 2.56; 13099.39; 20657.44; 5976.84; 0.44; 22682.73
Honda Motor: (116.30 ¥/$); 3.65; 1.79; 5391.80; 24098.41; 3417.87; 0.55; 26104.12
Hyundai Motor: 2.36; 0.46; 4669.68; 30187.24; 11590.18; 0.20; 31475.17

2005 (17.50)
General Motors: 4.52; N/A; 8179.00; 46377.82; 7090.53; 0.34; 42630.37
Ford Motor: 6.77; N/A; 7516.00; 26364.42; 4900.40; 0.86; 26131.96
Toyota Motor: (110.22 ¥/$); 7.41; 2.27; 594.59; 20671.50; 7017.39; 0.51; 22719.36
Honda Motor: (110.22 ¥/$); 3.39; 1.68; 2610.61; 24553.74; 4086.96; 0.85; 26509.26
Hyundai Motor: 2.40; 0.46; 4124.27; 27506.05; 10447.00; 0.95; 28782.89
APPENDIX B
RESOURCE ALLOCATION CALCULATIONS

Proof of Proposition 3: We first prove that the numerator, , is nonnegative concave, and that the denominator, k, is positive and linear. Therefore, is semistrictly quasi-concave. We then show that is decreasing as a function of for any , where = (Jagerman 1974, (23)). Without loss of generality, we remove to simplify expressions when unnecessary.
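The Jagerman (1974) machinery referenced here also gives an integral representation of the Erlang loss function, 1/B(k, rho) = rho * Integral_0^inf exp(-rho*t) * (1+t)^k dt, which is what permits treating k as a continuous variable when the integrality condition is relaxed. The check below is our illustration (values arbitrary), comparing the integral form against the standard recursion; the identity quoted is the classical one and may differ in notation from the paper's equation (23).

```python
from math import exp
from scipy.integrate import quad

def erlang_b_recursive(k: int, rho: float) -> float:
    b = 1.0
    for j in range(1, k + 1):
        b = rho * b / (j + rho * b)
    return b

def erlang_b_integral(k: float, rho: float) -> float:
    # 1/B(k, rho) = rho * Integral_0^inf exp(-rho*t) * (1+t)^k dt,
    # well defined for non-integer k.
    integral, _ = quad(lambda t: exp(-rho * t) * (1.0 + t) ** k, 0.0, float("inf"))
    return 1.0 / (rho * integral)

print(erlang_b_recursive(50, 40.0))   # recursion, integer k
print(erlang_b_integral(50.0, 40.0))  # integral form; agrees to numerical precision
```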
is strictly negative for k > 0 and it vanishes for k = 0 (see Cardoso (2009), Harel (1990)). Harel (1990) showed that is convex in ; we can represent using the methodology from Jagerman (1974). from the convexity of the blocking probability (Harel, 1990). if . This shows that the traffic intensity is decreasing as a function of when . We know that is concave in and is positive linear, so the function is semistrictly quasiconcave (Avriel et al., 1988).

Adoption Guarantee Shelter Solutions

Mean Demand Rate Problem
We can write the Lagrangian for the mean demand rate as follows: We can write the KKT conditions and the second order conditions as below:
, (3)
(4)
, d (5)
, [ ] = 0 (6)
The bordered Hessian is:
is quasi-concave when the bordered Hessian is negative semidefinite, i.e., if:
Insert (6) into (3) to get: (7)
Insert (7) into (4):
We utilize a heavy traffic approximation from Shakhov (2010):

Mean Waiting Time Problem
We can write the objective function: We can write the KKT conditions and the second order conditions as below:
, (8)
(9)
, d (10)
, [ ] = 0 (11)
The bordered Hessian is:
is quasi-concave when the bordered Hessian is negative semidefinite if:
Insert (10) into (8) to get: (12)
We know that . Thus, . Insert (12) into (9): We complete our analysis using the heavy traffic approximation from Shakhov (2010):
Mean Rejection Rate Problem
We can write the Lagrangian for the mean adoption rate as follows: We can write the KKT conditions and the second order conditions as below:
, (13)
(14)
, d (15)
, (16)
The bordered Hessian is:
is quasi-concave when the bordered Hessian is negative semidefinite if:
Insert (15) into (13) to get: (17)
Insert (17) into (14) to get the following equation: Solving this equation, we get the exact results.

Traffic Intensity Problem
We can write the KKT conditions and the second order conditions as below:
, (18)
(19)
, (20)
, [ ] = 0 (21)
if
The bordered Hessian:
is quasi-concave when the bordered Hessian is negative semidefinite if:
Equations (18) and (19) are first order equations with quadratic denominators, thus the exact solutions are not shown. We can use the heavy traffic approximation and find: (22)
Insert (22) into (19) to find:
Mean Adoption Rate Problem
We omit this calculation, as minimizing is very similar to maximizing .

Traditional Shelter Solutions

Mean Demand Rate Problem
We can write the Lagrangian for the mean demand rate as follows: We can write the KKT conditions and the second order conditions as below:
, (23)
, , [ ] = 0
The bordered Hessian:
is quasi-concave when the Hessian is negative semidefinite if:
Inserting the complementary slackness condition into the stationarity condition gives an intermediate expression; inserting (28) into it gives the following equation:
We can then use the heavy traffic approximation to obtain the results.
We utilize a heavy traffic approximation from Shakhov (2010):

Mean Waiting Time Problem
We can write the objective function: We can write the KKT conditions and the second order conditions as below:
, (24)
(25)
, d (26)
, [ ] = 0 (27)
The bordered Hessian is:
is quasi-concave when the bordered Hessian is negative semidefinite if:
, We know that .
We complete our analysis using the heavy traffic approximation from Shakhov (2010):

Mean Effective Euthanization Rate Plus Mean Rejection Rate
We can write the KKT conditions and the second order conditions as below:
, , , [ ] = 0
138 if The Bordered Hessian: is concave when the Hessian is negative semidefinite if:
(28)
where
We can then use the heavy traffic approximation to obtain the results.

Traffic Intensity
We can write the KKT conditions and the second order conditions as below:
, , , [ ] = 0
+
The bordered Hessian:
is concave when the Hessian is negative semidefinite if:
We can then use the heavy traffic approximation to obtain the results.
Mean Adoption Rate
We can write the KKT conditions and the second order conditions as below:
, , , [ ] = 0
The bordered Hessian:
is concave when the Hessian is negative semidefinite if:
We can then use the heavy traffic approximation to obtain the results.
APPENDIX C
CHARACTERISTICS OF THE DERIVATIVES OF THE BLOCKING PROBABILITY

We can find the first and second order conditions of B:
is strictly negative for x > 0 and it vanishes for x = 0. (The same as , but multiplied by G.) Harel (1990) showed that is convex in ; we can represent using the methodology from Jagerman (1974). And for f we have: if , since is strictly convex and decreasing as a function of k.
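The monotonicity and convexity properties invoked here are easy to confirm numerically. The finite-difference check below assumes B is the Erlang blocking probability and uses arbitrary illustrative values; it verifies that B is decreasing and convex in k.

```python
def erlang_b(k: int, rho: float) -> float:
    b = 1.0
    for j in range(1, k + 1):
        b = rho * b / (j + rho * b)
    return b

rho = 40.0
B = [erlang_b(k, rho) for k in range(1, 81)]
first = [B[i + 1] - B[i] for i in range(len(B) - 1)]                # expect < 0
second = [first[i + 1] - first[i] for i in range(len(first) - 1)]   # expect > 0
print("decreasing in k:", all(d < 0 for d in first))
print("convex in k:    ", all(d > 0 for d in second))
```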
APPENDIX D
COMPLETE DATA AND RESULTS

Table D-1. Form 990 data
Name | Contributions | Fundraising Income | Program Service Expenses | Highest Adoption Fee | Capacity | F tilde
Arizona Animal Welfare League | 3232146 | 29670 | 84980 | 350 | 103 | 4.9579167
Arizona Humane Society | 10627266 | 796322 | 210783 | 200 | 1745 | 52.17
Berkeley East Bay Humane Society | 886202 | 0 | 109381 | 175 | 13 | 34.97
East Bay SPCA | 3317673 | 0 | 25250 | 150 | 89 | 64.16
Humane Society of Silicon Valley | 6040482 | 413649 | 46325 | 350 | 81 | 56.1
Humane Society of Boulder Valley | 2195486 | 17910 | 68005 | 99 | 102 | 49.84
Longmont Humane Society | 1248359 | 97969 | 21475 | 295 | 99 | 37.65
Animal Rescue Foundation | 4592358 | 697600 | 391908 | 175 | 9 | 60.34
Jacksonville Humane Society | 2901660 | 78252 | 217976 | 125 | 93 | 51.31
Humane Society of Tampa Bay | 2300281 | 129163 | 31346 | 150 | 9 | 67.2
Animal Welfare League | 1472755 | 27287 | 12641 | 135 | 751 | 56.46
Anti-Cruelty Society | 3982051 | 10651 | 168985 | 170 | 106 | 58.68
Tree House Humane Society | 4681229 | 52565 | 91157 | 85 | 0 | 65.86
Last Chance Animal Rescue | 1095700 | 0 | 373953 | 100 | 25 | 36.99
Nevada Humane Society | 1894871 | 164740 | 131348 | 150 | 193 | 51.45
Animal Humane New Mexico | 2555674 | 0 | 46392 | 150 | 135 | 64.69
Erie County SPCA | 5815485 | 116195 | 319670 | 350 | 170 | 65.89
Humane Society of Greater Dayton | 1067935 | 163142 | 79958 | 300 | 46 | 53.44
Society for the Improvement of Conditions for Stray Cats | 548737 | 114489 | 58172 | 216 | 33 | 65.04
Cat Adoption Team | 702413 | 16973 | 70596 | 125 | 0 | 50.02
Oregon Humane Society | 9027061 | 695759 | 338714 | 100 | 107 | 65.28
Dane County Humane Society | 2133050 | 38076 | 4297 | 350 | 61 | 63.5
San Diego Humane Society | 13725255 | 532326 | 49120 | 195 | 328 | 62.62
Animal Friends Rescue Project | 6991153 | 422207 | 164174 | 150 | 69 | 63

Table D-2. Mean demand rate regression data
Organization | Adoptions 2011 | Adoptions 2010 | Difference | Advertisement | Adoption Fee
The Haven for Animals | 3 | 158 | -155 | 4780 | 250
Berkeley East Bay Humane Society | 1446 | 256 | 1190 | 109381 | 175
Home At Last Animal Rescue | 3378 | 1307 | 2071 | 90 | 200
Purrfect Cat Rescue | 2909 | 1099 | 1810 | 1140 | 125
Sunshine Rescue | 0 | 179 | -179 | 0 | 100
Tri-Valley Animal Rescue | 367 | 1405 | -1038 | 35698 | 200
Valley Humane Society | 1 | 5741 | -5740 | 0 | 150
Grateful Dogs Rescue | 460 | 1555 | -1095 | 754 | 250
| Adams County Animal Control | 17 | 535 | 518 | 280 | 100 |

Funding Regression Results

SUMMARY OUTPUT

Regression Statistics
| Multiple R | 0.849765336 |
| R Square | 0.722101125 |
| Adjusted R Square | 0.625590777 |
| Standard Error | 2993605.49 |
| Observations | 23 |

ANOVA
| | df | SS | MS | F |
| --- | --- | --- | --- | --- |
| Regression | 4 | 4.4244E+14 | 1.11E+14 | 12.34255 |
| Residual | 19 | 1.70272E+14 | 8.96E+12 | |
| Total | 23 | 6.12711E+14 | | |

| | Coefficients | Standard Error | t Stat | P-value |
| --- | --- | --- | --- | --- |
| Intercept | 0 | #N/A | #N/A | #N/A |
| 84980 | 5.297596507 | 5.142603223 | 1.030139 | 0.315881 |
| 350 | 2082.739242 | 5788.149713 | 0.35983 | 0.722944 |
| 103 | 3920.35509 | 1695.2334 | 2.312575 | 0.032113 |
| 4.957916667 | 54288.97787 | 27520.91611 | 1.972644 | 0.063267 |

Mean Demand Rate Regression Results

SUMMARY OUTPUT

Regression Statistics
| Multiple R | 0.294851644 |
| R Square | 0.086937492 |
| Adjusted R Square | 0.231906259 |
| Standard Error | 2602.449591 |
| Observations | 8 |

ANOVA
| | df | SS | MS | F |
| --- | --- | --- | --- | --- |
| Regression | 2 | 3869211.77 | 1934606 | 0.285646 |
| Residual | 6 | 40636463.23 | 6772744 | |
| Total | 8 | 44505675 | | |

| | Coefficients | Standard Error | t Stat | P-value |
| --- | --- | --- | --- | --- |
| Intercept | 0 | #N/A | #N/A | #N/A |
| 4780 | 0.015670769 | 0.025809413 | 0.607173 | 0.565989 |
| 250 | 4.242619088 | 6.175747382 | 0.68698 | 0.517752 |

Complete Numerical Results

Mean Demand
| Lambda | 1000 | 1400 | 1800 | 2200 |
| --- | --- | --- | --- | --- |
| a | 3874889.344 | 3874889.344 | 3874889.344 | 3874889.344 |
| d | 0 | 0 | 0 | 0 |
| f | 200 | 200 | 200 | 600 |
| k1o | 0 | 0 | 0 | 0 |
| Z | 19227168.34 | 19147169.78 | 19067169.78 | 19093387.89 |

Mean Waiting
| Lambda | 1000 | 1400 | 1800 | 2200 |
| --- | --- | --- | --- | --- |
| a | 3874889.344 | 3874889.344 | 3874889.344 | 3874889.344 |
| d | 0 | 0 | 0 | 0 |
| f | 200 | 200 | 200 | 600 |
| k1o | 0 | 0 | 0 | 0 |
| Z | 19227168.34 | 19082612.32 | 19002612.61 | 19032227.09 |

Mean Adoption
| Lambda | 1000 | 1400 | 1800 | 2200 |
| --- | --- | --- | --- | --- |
| a | 3874889.344 | 3874889.344 | 3874889.344 | 3874889.344 |
| d | 0 | 0 | 0 | 0 |
| f | 200 | 200 | 200 | 600 |
| k1o | 0 | 0 | 0 | 0 |
| Z | | 19084012.32 | 19004412.61 | 19034427.09 |

Traffic
| Lambda | 1000 | 1400 | 1800 | 2200 |
| --- | --- | --- | --- | --- |
| a | 5546194.859 | 3874889.344 | 3874889.344 | 3874889.344 |
| d | 0 | 0 | 0 | 0 |
| f | 300 | 200 | 200 | 600 |
| k1o | 0 | 0 | 0 | 0 |
| Z | | 19082612.32 | 19002612.61 | 19032227.09 |

Mean Demand Rate
| Qa | 0 | 4 | 8 | 12 |
| --- | --- | --- | --- | --- |
| a | 61361.65546 | 3876454.007 | 3876465.436 | 3876461.5 |
| d | 3815070.682 | 0 | 4.33256366 | 0 |
| f | 600 | 600 | 600 | 600 |
| k1o | 0 | 0 | 0 | 0 |
| Z11 | 6143896.112 | 8.41794E+11 | 1.68359E+12 | 2.52539E+12 |

Mean Waiting Time
| Qa | 0 | 4 | 8 | 12 |
| --- | --- | --- | --- | --- |
| a | 62968.92954 | 3876454.007 | 3876465.435 | 3876461.5 |
| d | 3813465.203 | 0 | 4.332450288 | 0 |
| f | 600 | 600 | 600 | 600 |
| k1o | 0 | 0 | 0 | 0 |
| Z21 | 6140854.146 | 8.41794E+11 | 1.68359E+12 | 2.52539E+12 |

Mean Rejection Rate
| Qa | 0 | 4 | 8 | 12 |
| --- | --- | --- | --- | --- |
| a | 58602.03873 | 3876454.007 | 3876465.435 | 3876461.5 |
| d | 3817830.603 | 0 | 4.332450288 | 0 |
| f | 600 | 600 | 600 | 600 |
| k1o | 0 | 0 | 0 | 0 |
| Z31 | 6152179.181 | 8.41794E+11 | 1.68359E+12 | 2.52539E+12 |

Traffic Intensity
| Qa | 0 | 4 | 8 | 12 |
| --- | --- | --- | --- | --- |
| a | 56427.49969 | 3876454.007 | 3876465.435 | 3876461.5 |
| d | 3818014.238 | 0 | 4.33254448 | 0 |
| f | 600 | 600 | 600 | 600 |
| k1o | 0 | 0 | 0 | 0 |
| Z41 | 6121069.042 | 8.41794E+11 | 1.68359E+12 | 2.52539E+12 |

Mean Demand Rate
| Qf | 2 | 6 | 10 | 14 |
| --- | --- | --- | --- | --- |
| a | 3818745.237 | 3818745.237 | 3818745.237 | 3818745.237 |
| d | 0.054532664 | 0.054532664 | 0.054532664 | 0.054532664 |
| f | 199.905357 | 199.905357 | 199.905357 | 199.905357 |
| k1o | 33 | 33 | 33 | 33 |
| Z11 | 4.46361E+12 | 4.46364E+12 | 4.46368E+12 | 4.46371E+12 |

Mean Waiting Time
| Qf | 2 | 6 | 10 | 14 |
| --- | --- | --- | --- | --- |
| a | 3876464.103 | 3876464.103 | 3876464.103 | 3876464.103 |
| d | 3.156501299 | 3.156501299 | 3.156440845 | 3.156538937 |
| f | 600 | 600 | 600 | 600 |
| k1o | 0 | 0 | 0 | 0 |
| Z21 | 4.53107E+12 | 4.5311E+12 | 4.53112E+12 | 4.53115E+12 |

Mean Rejection Rate
| Qf | 2 | 6 | 10 | 14 |
| --- | --- | --- | --- | --- |
| a | 3876464.103 | 3876464.103 | 3876464.103 | 3876464.103 |
| d | 3.156392195 | 3.156392195 | 3.156398846 | 3.156435503 |
| f | 600 | 600 | 600 | 600 |
| k1o | 0 | 0 | 0 | 0 |
| Z31 | 4.53107E+12 | 4.5311E+12 | 4.53112E+12 | 4.53115E+12 |

Traffic Intensity
| Qf | 2 | 6 | 10 | 14 |
| --- | --- | --- | --- | --- |
| a | 3876464.103 | 3876464.103 | 3876464.103 | 3876464.103 |
| d | 3.156513028 | 3.156513028 | 3.156300363 | 3.156341207 |
| f | 600 | 600 | 600 | 600 |
| k1o | 0 | 0 | 0 | 0 |
| Z41 | 4.53107E+12 | 4.5311E+12 | 4.53112E+12 | 4.53115E+12 |

Mean Demand Rate
| Qd | 2 | 6 | 10 | 14 |
| --- | --- | --- | --- | --- |
| a | 3.8749E+06 | 66698.45 | 502.4242 | 350.9301 |
| d | 0.0000E+00 | 3809735 | 3875961 | 3876117 |
| f | 6.0000E+02 | 600 | 389.3422 | 388.7282 |
| k1o | 0.0000E+00 | 0 | 0 | 0 |
| Z11 | 19801390.39 | 22412371 | 37684767 | 53184405 |

Mean Waiting Time
| Qd | 2 | 6 | 10 | 14 |
| --- | --- | --- | --- | --- |
| a | 3.8749E+06 | 1145.403 | 6.235134 | 0.012138 |
| d | 0.0000E+00 | 3875307 | 3876464 | 3876470 |
| f | 6.0000E+02 | 200 | 387.4443 | 387.4216 |
| k1o | 0.0000E+00 | 0 | 0 | 0 |
| Z21 | 19740227.15 | 21408129 | 37684661 | 53190540 |

Mean Rejection Rate
| Qd | 2 | 6 | 10 | 14 |
| --- | --- | --- | --- | --- |
| a | 3.8749E+06 | 1145.403 | 115.4886 | 0 |
| d | 0.0000E+00 | 3875307 | 3876355 | 3876470 |
| f | 6.0000E+02 | 200 | 387.8547 | 387.4197 |
| k1o | 0.0000E+00 | 0 | 0 | 0 |
| Z31 | 19743607.15 | 21408961 | 37688069 | 53193919 |

Traffic Intensity
| Qd | 2 | 6 | 10 | 14 |
| --- | --- | --- | --- | --- |
| a | 3.8749E+06 | 314103.6 | 2042.342 | 237.226 |
| d | 0.0000E+00 | 3562366 | 3874427 | 3876226 |
| f | 6.0000E+02 | 600 | 395.0923 | 161.0943 |
| k1o | 0.0000E+00 | 0 | 0 | 0 |
| Z41 | 19740227.15 | 22238856 | 37685186 | 53180416 |

Mean Demand Rate
| w2 | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- |
| a | 3876470 | 3876469.201 | 3876469.201 | 3876469.201 |
| d | 0 | 0 | 0 | 0 |
| f | 600 | 200 | 200 | 200 |
| k1o | 0 | 0 | 0 | 0 |
| Z11 | 53694 | 9.06214E+12 | 9.06214E+12 | 9.06214E+12 |

Mean Waiting Time
| w2 | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- |
| a | 3.7838E+06 | 3876469.201 | 3876469.201 | 3876469.201 |
| d | 0.0000E+00 | 0 | 0 | 0 |
| f | 6.0000E+02 | 200 | 200 | 200 |
| k1o | 0.0000E+00 | 0 | 0 | 0 |
| Z21 | 7467.72002 | 9.06214E+12 | 9.06214E+12 | 9.06214E+12 |

Mean Rejection Rate
| w2 | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- |
| a | 3.5247E+06 | 3876469.201 | 3876469.201 | 3876469.201 |
| d | 0.0000E+00 | 0 | 0 | 0 |
| f | 6.0000E+02 | 200 | 200 | 200 |
| k1o | 0.0000E+00 | 0 | 0 | 0 |
| Z31 | 4087.7 | 9.06214E+12 | 9.06214E+12 | 9.06214E+12 |

Traffic Intensity
| w2 | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- |
| a | 5.7156E+04 | 3876469.201 | 3876469.201 | 3876469.201 |
| d | 1.6269E+06 | 0 | 0 | 0 |
| f | 6.0000E+02 | 200 | 200 | 200 |
| k1o | 0.0000E+00 | 0 | 0 | 0 |
| Z41 | 7467.16882 | 9.06214E+12 | 9.06214E+12 | 9.06214E+12 |

Mean Demand Rate
| w3 | 0 | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- | --- |
| a | 3776466.843 | 3876464.103 | 3876464.103 | 3876464.103 | 3876464.103 |
| d | 3.156586616 | 0.108108201 | 0.108108201 | 3.156549778 | 3.156531635 |
| f | 200 | 600 | 600 | 600 | 600 |
| k1o | 100 | 0 | 0 | 0 | 0 |
| Z11 | 4.53106E+12 | 4.53107E+12 | 4.53107E+12 | 4.53106E+12 | 4.53106E+12 |

Mean Waiting Time
| w3 | 0 | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- | --- |
| a | 3776466.843 | 3876469.533 | 3876469.533 | 3876464.103 | 3876464.103 |
| d | 3.156361622 | 0.108108201 | 0.108108201 | 3.156391181 | 3.156391181 |
| f | 200 | 600 | 600 | 600 | 600 |
| k1o | 100 | 0 | 0 | 0 | 0 |
| Z21 | 4.53106E+12 | 4.53107E+12 | 4.53107E+12 | 4.53106E+12 | 4.53106E+12 |

Mean Rejection Rate
| w3 | 0 | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- | --- |
| a | 3776466.843 | 3876469.533 | 3876469.533 | 3876464.103 | 3876464.103 |
| d | 3.15643701 | 0.108108201 | 0.108108201 | 3.156571111 | 3.156489062 |
| f | 200 | 600 | 600 | 600 | 600 |
| k1o | 100 | 0 | 0 | 0 | 0 |
| Z31 | 4.53106E+12 | 4.53107E+12 | 4.53107E+12 | 4.53106E+12 | 4.53106E+12 |

Traffic Intensity
| w3 | 0 | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- | --- |
| a | 3776466.843 | 3876464.103 | 3876464.103 | 3876464.723 | 3876466.547 |
| d | 3.156307084 | 3.156226038 | 3.156226038 | 0 | 0 |
| f | 200 | 600 | 600 | 600 | 600 |
| k1o | 100 | 0 | 0 | 0 | 0 |
| Z41 | 4.53106E+12 | 4.53106E+12 | 4.53106E+12 | 4.53106E+12 | 4.53107E+12 |

Mean Demand Rate
| G | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- |
| a | 3766026.959 | 3818915.856 | 3818915.856 | 3818915.856 |
| d | 73480.5 | 0.054532664 | 0.054532664 | 0.054532664 |
| f | 130.414 | 199.905357 | 199.905357 | 199.905357 |
| k1o | 28 | 33 | 33 | 33 |
| Z11 | 4.40198E+12 | 4.4638E+12 | 4.46365E+12 | 4.46364E+12 |

Mean Waiting Time
| G | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- |
| a | 3132990.015 | 3132990.015 | 3132990.015 | 3132990.015 |
| d | 505594.9465 | 505594.9465 | 505594.9465 | 505594.9465 |
| f | 590.5457757 | 590.5457757 | 590.5457757 | 590.5457757 |
| k1o | 79 | 79 | 79 | 79 |
| Z21 | 3.66204E+12 | 3.66204E+12 | 3.66205E+12 | 3.66205E+12 |

Mean Rejection Rate
| G | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- |
| a | 3132990.015 | 3132990.015 | 3132990.015 | 3132990.015 |
| d | 505594.9465 | 505594.9465 | 505594.9465 | 505594.9465 |
| f | 590.5457757 | 590.5457757 | 590.5457757 | 590.5457757 |
| k1o | 79 | 79 | 79 | 79 |
| Z31 | 3.66204E+12 | 3.66204E+12 | 3.66204E+12 | 3.66204E+12 |

Traffic Intensity
| G | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- |
| a | 3132990 | 3133159.405 | 3133159.405 | 3133159.405 |
| d | 505594.9465 | 505594.9465 | 505594.9465 | 505594.9465 |
| f | 390.1829471 | 504.4265378 | 504.4265378 | 504.4265378 |
| k1o | 79 | 79 | 79 | 79 |
| Z41 | 3.66204E+12 | 3.66224E+12 | 3.66214E+12 | 3.66211E+12 |
LIST OF REFERENCES

Abilock, H., L.B. Fishbone. 1979. User's Guide for MARKAL (BNL Version). Brookhaven National Laboratory, Upton, N.Y.
Angell, L.C., R.D. Klassen. 1999. Integrating environmental issues into the mainstream: an agenda for research in operations management. Journal of Operations Management 17(5) 575-598.
Anonymous. 2010. Measuring Fundraising Return on Investment and the Impact of Prospect Research: Factors to Consider. A WealthEngine White Paper. <http://info.wealthengine.com/rs/wealthengine/images/Return on Investment May 2010.pdf>.
Anonymous. 2013. Coming home; Reshoring manufacturing. The Economist, 19 Jan. 2013.
Anonymous. 2014. Scope of the Nonprofit Sector. <http://www.independentsector.org/scope_of_the_sector>.
Avriel, M., Diewert, W.E., Schaible, S., and Zang, I. Generalized Concavity. SIAM, 2010.
Bean, J.C., J.L. Higle and R.L. Smith. 1992. Capacity Expansion under Stochastic Demands. Operations Research 40(2) 210-216.
Cardoso, D.M., Craveirinha, J. and Esteves, J.S. (2009). Second Order Conditions on the Overflow Traffic Function from the Erlang B System: A Unified Analysis. Journal of Mathematical Sciences, Vol. 161, No. 6.
Chabotar, K.J. (1989). Financial ratio analysis comes to nonprofits. Journal of Higher Education, 60(2), 188-208.
Chandra, S. (1972). Strong Pseudo Convex Programming. Indian Journal of Pure and Applied Math, Vol. 3, No. 2.
Chang, C.F., & Tuckman, H.P. (1991a). A methodology for measuring the financial vulnerability of charitable nonprofit organizations. Nonprofit and Voluntary Sector Quarterly, 20, 445-460.
Chang, C.F., & Tuckman, H.P. (1991b). Financial vulnerability and attrition as measures of nonprofit performance. Annals of Public and Cooperative Economics, 62(4), 655-672.
Chen, C., G.E. Monahan. 2010. Environmental safety stock: The impacts of regulatory and voluntary control policies on production planning, inventory control, and environmental performance. European Journal of Operational Research, 207, 1280-1292.
Col, J. 1996. USA Latitude and Longitude Activity. Enchanted Learning. <http://www.EnchantedLearning.com>.
Corbett, C.J., F.J.C. Debets, L.N. Van Wassenhove. 1995. Decentralization of responsibility for site decontamination projects: A budget allocation approach. European Journal of Operational Research 86, 103-119.
Corbett, C.J., P.R. Kleindorfer. 2001. Environmental Management and Operations Management: Introduction to Part 1 (Manufacturing and Eco-logistics). Production and Operations Management 10(2), 107-111.
Corbett, C.J., R.D. Klassen. 2006. Extending the Horizons: Environmental Excellence as Key to Improving Operations. Manufacturing and Service Operations Management 8(1) 5-22.
Dangayach, G.S., S.G. Deshmukh. 2001. Manufacturing strategy: Literature review and some issues. International Journal of Operations and Production Management 21(7) 884-932.
EPA Office of Chief Financial Officer. 2012. National Enforcement Trends. <http://www.epa.gov/oig/reports/2012/20111209-12-P-0113.pdf>.
Efroymson, M.A., T.L. Ray. 1966. A Branch and Bound Algorithm for Plant Location. Operations Research 14(3) 361-368.
Erlenkotter, D. 1978. A Dual-Based Procedure for Uncapacitated Facility Location. Operations Research 26(6) 992-1009.
GM Sustainability Report. 2012. <http://www.gm.com/vision/environment1/our_commitment.html>.
Gray, W., R. Shadbegian. 1993. Environmental Regulation and Manufacturing Productivity at the Plant Level. Center for Economic Studies, UC San Diego.
Feldstein, M.S. 1971. Economic Analysis for Health Service Efficiency, Amsterdam 1967.
Fomundan, S. and Herrmann, J. (2007). A Survey of Queuing Theory Applications in Healthcare. ISR Technical Report 2007-24, The Institute for Systems Research (The University of Maryland).
Gronbjerg, K.A. (1990). Managing nonprofit funding relations: Case studies of six service organizations (PONPO Working Paper No. 156, ISPS Working Paper No. 2156). New Haven, CT: Yale University, Program on Non-Profit Organizations.
Gronbjerg, K.A. (1991a). Managing grants and contracts: The case of four nonprofit social service organizations. Nonprofit and Voluntary Sector Quarterly, 20, 5-24.
Gronbjerg, K.A. (1991b). How nonprofit human service organizations manage their funding sources: Key findings and policy implications. Nonprofit Management and Leadership, 2, 159-176.
Gross, D. and Harris, C.C. Fundamentals of Queueing Theory (2nd ed.). John Wiley & Sons, Inc., New York, NY, USA, 1985.
Hansmann, H. (1980). The Role of Nonprofit Enterprise. The Yale Law Review, 89, 835-899.
Harel, A. (1990). Convexity Properties of the Erlang Loss Formula. Operations Research, Vol. 38, No. 3 (May-June 1990), pp. 499-505.
Holmberg, K. 1994. Solving the staircase cost facility location problem with decomposition and piecewise linearization. European Journal of Operational Research 75, 41-61.
Jacobse, A.J., P.P.G. Wolbert. 1988. Saneringsproject en saneringstechniek: Een keuzeprobleem. Agricultural University, Wageningen (in Dutch).
Jaffe, A.B., S.R. Peterson, P.R. Portney, R.N. Stavins. 1995. Environmental Regulation and the Competitiveness of U.S. Manufacturing: What Does the Evidence Tell Us? Journal of Economic Literature 33(1) 132-163.
Jagerman, D.L. (1974). Some Properties of the Erlang Loss Function. The Bell System Technical Journal, Vol. 53, No. 3.
James, E., and Neuberger, E. The University Department as a Non-Profit Labor Cooperative. Public Choice, 36 (1981): 585-61.
James, E. (1983). How Nonprofits Grow: A Model. Journal of Policy Analysis and Management, Vol. 2, No. 3, pp. 350-365.
Kingma, B.R. (1993). Portfolio Theory and Nonprofit Financial Stability. Nonprofit and Voluntary Sector Quarterly 22:105.
Economics and The Environment. John Hopkins Press.
Klassen, R.D., S. Vachon. 2003. Collaboration and Evaluation in the Supply Chain: The Impact on Plant-Level Environmental Investment. Production and Operations Management 12(3), 336-352.
Kraft, T., Erhun, F., Carlson, R.C. and Rafinejad, D. 2013. Replacement Decisions for Potentially Hazardous Substances. Production and Operations Management 22(4), 958-975.
Krarup, J., Pruzan, P. (1983). The Simple Plant Location Problem: Survey and synthesis. European Journal of Operational Research, 12, 366-381.
Lee, M.L. (1971). A Conscious Production Theory of Hospital Behavior. Southern Economic Journal 38: 48-59.
Leszczyc, P.T.L.P. and Rothkopf, M.H. (2010). Charitable Motives and Bidding in Charity Auctions. Management Science, Vol. 56, No. 3, pp. 399-413.
Licata, A., H.U. Hartenstein and L. Terraciano. 1992. Comparison of U.S. EPA and European Emissions Standards for Combustion and Incineration Technologies. <http://www.seas.columbia.edu/earth/wtert/sofos/nawtec/nawtec05/nawtec05-48.pdf>.
Lien, R.W., Iravani, S.M.R. and Smilowitz, K.R. (2013). Sequential Resource Allocation for Nonprofit Operations. Operations Research, Articles in Advance, pp. 1-17.
Manne, A.S. 1961. Capacity Expansion and Probabilistic Growth. Econometrica 29, 632-649.
Manne, A.S. 1967. Investments for Capacity Expansion: Size, Location, and Time Phasing. MIT Press, Cambridge, Mass.
Manne, A.S., R.G. Richels. 1992. Buying Greenhouse Insurance: The Economic Costs of Carbon Dioxide Emission Limits. MIT Press, Cambridge, Mass.
Melo, M.T., S. Nickel, F. Saldanha da Gama. 2009. Facility location and supply chain management - A review. European Journal of Operational Research 196(2) 401-412.
Nieuwenhuis, P. 2008. From banger to classic - a model for sustainable car consumption? International Journal of Consumer Studies 32(6) 648-655.
Newhouse, J.P. 1970. Toward a Theory of Nonprofit Institutions: An Economic Model of a Hospital. The American Economic Review, Vol. 60, No. 1, pp. 64-74.
Niskanen, W. (1971). Bureaucracy and Representative Government. Chicago: Aldine Publishing Co.
Pallabi, M. and Amit, C. Reneging in Queues Without Waiting Space. International Journal of Research in Applied, Natural and Social Sciences, Vol. 1, Issue 3, pp. 11-124.
Parry, D.G. (2000). The Behavioral Implications of Long Term Shelter Cat Stays and <http://www.maddiesfund.org/Maddies_Institute/Articles/The_Behavioral_Implications_of_Long_Term.html>.
Pashigian, B.P. 1984. The Effect of Environmental Regulation on Optimal Plant Size and Factor Shares. Journal of Law and Economics 27(1) 1-28.
Shadbegian, R.J., W.B. Gray. 2005. Pollution abatement expenditures and plant-level productivity: A production function approach. Ecological Economics, Volume 54, Issues 2-3, 196-208.
Shah, H., B. Roberts. October 2010. Motors Liquidation Company (f/k/a General Motors (GM) Corporation) Bankruptcy Settlement. Cases and Settlements. <http://www2.epa.gov/enforcement/case-summary-2010-mlc-general-motors-bankruptcy-settlement>.
Shakhov, V. (2010). Simple Approximation for Erlang B Formula. IEEE Region 8 SIBIRCON 2010, Irkutsk Listvyanka, Russia, July 11-15, 2010.
Shirouzu, N. (2011, May 11). GM bets large on rural China markets -- success of microvans prompts push to manufacture other no-frills vehicles for emerging consumers. Wall Street Journal. Retrieved from <http://search.proquest.com/docview/865682813?accountid=10920>.
Snir, E.M. 2001. Liability as a Catalyst for Product Stewardship. Production and Operations Management 10(2), 190-206.
Terlep, S. (2012, Feb 06). Target at post-bailout GM: Earning $10 billion a year. Wall Street Journal (Online). Retrieved from <http://search.proquest.com/docview/919863614?accountid=10920>.
Thorn, S. June 2012. How does EPA calculate its penalties? IAEP Network 40(1). <http://www.thornenvironmentallaw.com/blog/how-does-epa-calculate-its-penalties>.
Tullock, G. (1966). Information without Profit. In Papers of Non-Market Decision Making. Edited by Gordon Tullock. Charlottesville: Thomas Jefferson Center for Political Economy, University of Virginia.
Verheyen, P. (1998). The Missing Link in Budget Models of Nonprofit Institutions: Two Practical Dutch Applications. Management Science, Vol. 44, No. 6.
Verter, V., C. Dinçer. 1992. An Integrated Evaluation of Facility Location, Capacity Acquisition and Technology Selection for Designing Global Manufacturing Strategies. European Journal of Operational Research 60(1) 1-18.
Verter, V., C. Dinçer. 1995. Facility Location and Capacity Acquisition: An Integrated Approach. Naval Research Logistics, 42, 1141-1160.
Weisbrod, B. (1975). Toward a Theory of the Voluntary Non-Profit Sector in a Three-Sector Economy. In Altruism, Morality and Economic Theory (Edmund Phelps, ed.), pp. 171-195. New York: Russell Sage Foundation.
Weisbrod, B. (1979). "Economics of Institutional Choice" (Draft, University of Wisconsin).
Wells, P., R.J. Orsato. 2008. Redesigning the Industrial Ecology of the Automobile. Journal of Industrial Ecology 9(3) 15-30.
Wheelwright, S.C., R.R. Hayes. January 1985. Competing Through Manufacturing. Harvard Business Review 63(1) 99-109.
BIOGRAPHICAL SKETCH

Nazli Turken is a PhD student in Information Systems and Operations Management at the University of Florida. Her primary research interests are green supply chains, nonprofit operations, humanitarian operations, and healthcare operations. Her recent research has been published in Springer's Handbook of Newsvendor Problems: Models, Extensions and Applications. She is also a member of the Institute for Operations Research and the Management Sciences (INFORMS), the Manufacturing & Service Operations Management Society (MSOM), and the Production and Operations Management Society (POMS).
Dirt Late Model Vehicle Dynamics - Part 3
Welcome to the third installment of our series on dirt late model vehicle dynamics. In part two, we looked at some of the basic concepts of engineering analysis to use as a procedural baseline. We also discussed the eight unknown values of force and displacement that we will be solving for. With this understanding, we can now begin to look at the equations that will make up the mathematical model defining our race car.
As discussed in part two, the first subgroup of equations that we will be developing are the load transfer equations. The equations in this group define how the chassis will react to external inputs of force, aerodynamics, etc. applied to the vehicle.
It is worth taking a few moments here to discuss the concept of a resultant vector. A race car is made up of a large number of independent masses that are all physically connected. For example, the fuel cell can be thought of as an independent mass with its own center of mass, moment of inertia, etc. The engine is another mass with its own center of mass, moment of inertia, etc. The driver, the transmission, each of the four wheels are all independent masses that could have a force analysis performed on them independently. In fact, if you wanted to get real detailed, you could break the car down into every nut, bolt, washer, and rivet on the car having its own center of mass, moment of inertia, etc. Although in theory this is possible, and is actually how a CAD system would work, it isn’t a very realistic thing to do when developing our model. To simplify things, we can leverage the concept of a resultant vector, or in our case the resultant force vector. When doing a force analysis, it is common practice to combine all of the independent applied forces on a body, each with its own magnitude and direction, into one equivalent resultant force with its own magnitude and direction. When this resultant force is applied to the body, the body will react exactly the same as if each of the separate forces were applied independently. Similarly, we also define an equivalent center of mass. The equivalent center of mass is the center of mass of the body that is made up of all the independent masses (i.e. the fuel cell, the engine, the driver, the ballast, etc.). The resultant force vector is applied at the equivalent center of mass, which yields the same result that we would get if we evaluated each of the independent masses separately. What does this mean in the real world? Think about what happens when you move a piece of ballast, a battery, or a fuel cell around in the chassis. You are not changing the applied forces; you are simply moving the location of the center of mass. Keep this in the back of your mind as we develop our system of equations!
The resultant force vector applied to the center of mass is a combination of the gravitational force, the centrifugal force, and the acceleration and/or braking forces acting on the chassis. Note that all of these forces can be thought of as “external” forces acting on the chassis. To understand this concept, let’s think about the unrealistic scenario of a car driving in a circle on a flat surface with no body roll and no yaw angle relative to the tangential line of the circle’s curve, and with a moderate degree of forward acceleration. The gravitational force will act directly on the center of mass straight down, or in the negative “z” direction. This will be the “z” component of the resultant force vector. As the car drives around the curve of the circle, a centrifugal force is developed along the “y” axis of the car. Assuming that the car is turning left, this force will point towards the right of the car, or will be in the negative “y” direction. In response to this centrifugal force, there is a centripetal force developed, which is distributed among the four tire contact patches, that acts to hold the car on the path of the circle. This centripetal force is equal and opposite to the centrifugal force, assuming a constant radius curve. This is the case since we are traveling along a circular path. The centrifugal force will be the “y” component of the resultant vector. As the car accelerates forward, the rear tires are acting to push the car. At this point, we will assume the forward thrust is distributed equally between the left and right rear tires, and that there is no rear steer; however, neither would be the case in the real world. As the tires apply a force forward at the tire contact patches in the positive “x” direction, this force is reacted at the center of mass in the opposite direction, or in the negative “x” direction. This reaction force is the “x” component of the resultant force vector.
Under the scenario described above, we would have a resultant force vector going through the center of mass and pointing somewhere towards the right rear tire contact patch. In the real world, this would be a much more complicated scenario. There would be banking, the chassis would have body roll, and there would be some degree of chassis yaw angle from the tangential of the curve path. If these values are known, the components of the resultant force vector can be modified accordingly. However, it is quite difficult to predict all of these inputs. How often do you know a track's banking angle? How often do you know the radius of curvature? How do you know how much yaw the car has? All of these can be estimated to some degree, but there will inevitably be some amount of error involved. Obtaining acceleration data from a DAQ system is a much easier way of calculating the component forces. Simply mount the DAQ system in line with the chassis coordinate system, and the three-axis accelerometer will give you all the accelerations at any given point around the track. Forces can then be calculated using the mass of the vehicle. Note that smoothing of the DAQ data will generally be required to get usable acceleration data. Throughout this series, we will assume that DAQ data is available and we will not worry about estimating these forces using track geometry and velocity.
For our system, let’s assume that the DAQ system is telling us that at the point in question, the chassis is experiencing the following:
Let’s also assume that our car weighs 2350 lbs.
The term "g" is telling us that the acceleration is some multiple of one unit of gravity. One "g" is equal to approximately 32.2 ft/sec², so 1.1g is 1.1 times the acceleration due to gravity, 0.8g is 0.8 times the acceleration due to gravity, and so on. To find the applied force, we can simply use Newton's equation, F = ma. We are using our "g" value as a scaling factor for the acceleration. We can also use it as a scaling factor for the force, since force is equal to the mass (a constant) times the acceleration; there is a linear relation between the force and the acceleration. For our example, the vertical force exerted on the chassis will be -1.1g x |2350 lbs.|, or -2585 lbs. You may be asking yourself where this extra 235 lbs. came from, and why the weight is negative. The extra force is most likely induced by the bank angle of the track. There will also be additional forces due to aerodynamic effects; however, the accelerometer will not detect aerodynamic forces. Aerodynamic forces, in general, can be thought of as an additional external force acting at the center of pressure. The force is negative because it is pointing in the negative "z" direction. Remember that a force is a vector, so it has magnitude and direction. We use the magnitude of the weight as an input to the equation because we are only interested in the quantity of the weight at this point. The acceleration vector gives the direction, and the output of the equation is the resulting effective weight, with direction.
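As a quick sanity check, here is a small Python sketch of this force calculation. The -1.1 g vertical reading and the 2350 lb weight come from the example above; the longitudinal and lateral g values are made-up placeholders:

```python
# Illustrative g-factors in the chassis coordinate system (x, y, z).
# Only the vertical value comes from the worked example in the text.
g_long, g_lat, g_vert = 0.25, -0.8, -1.1   # units of g
weight_lbs = 2350.0

# Since the g-factor scales force linearly (F = m*a), each component of
# the resultant force is just the g value times the weight magnitude.
F_x = g_long * abs(weight_lbs)   # lbs
F_y = g_lat * abs(weight_lbs)    # lbs
F_z = g_vert * abs(weight_lbs)   # lbs; -2585 lbs, as computed above
print(F_x, F_y, F_z)
```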
This leads us to our first equation. For equation number one, we are going to sum the forces in the "z" direction. We will be solving the system of equations using a quasi-static approach, so we will set the equations in our system equal to zero. Equation one looks like this:
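A reconstruction in symbols, based on the description that follows (g_z is the vertical g-factor, |W| the magnitude of the vehicle weight, F_aero the net vertical aerodynamic force, and P_1 through P_4 the vertical reactions at the four contact patches):

$$\sum F_z = g_z \,|W| + F_{aero} + P_1 + P_2 + P_3 + P_4 = 0$$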
This equation is adding all the forces in the "z" direction. This is the applied force of the vehicle's weight multiplied by the g-factor, plus the load induced by aerodynamic effects, plus the reaction forces at each of the four contact patches (Pn). Note that if the weight of the car and the aerodynamic forces are pointing down, or are negative, the sum of the four contact patches must be positive to satisfy the equation and make it equal to zero. Since our tires are not rails, although some of us like to think we are running on rails, they cannot create a negative force holding the car down (unless you are using some really good tire dope); therefore, the "Pn" must always be positive or zero. This means that the sum of the tire forces will always support the weight of the car plus the induced aerodynamic forces. This, of course, assumes we are generating aerodynamic down-force and not aerodynamic lift greater than the effective weight of the car, sending the car into flight. The figure below shows a free body diagram for equation number one. Notice that the force vectors are represented by arrows, and that the arrows are all pointing in the positive direction. Drawing the vectors in the positive direction is standard practice when developing free body diagrams. This helps to keep the signs straight when writing the equations. We let the vector go positive or negative in the equation as needed depending on the inputs to the equation. A negative quantity indicates that the vector direction is opposite the arrow shown, so in our case the weight will have a negative value, meaning that the force will be in the down direction, opposite the drawn arrow.
For the second and third equations in our system of equations, we will be summing the moments about the x-axis and the y-axis respectively. In order to do this, we will utilize the cross product to calculate the moments. The cross product is a 3D vector mathematics tool used to calculate the moments about a point around the x, the y, and the z axes at the same time. We will not spend a lot of time discussing the tool as it can be found in any appropriate math or engineering text. The cross product is:
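In component form, the standard cross product of a positioning vector r and a force vector F is:

$$M_O = r \times F = (r_y F_z - r_z F_y)\,\hat{i} + (r_z F_x - r_x F_z)\,\hat{j} + (r_x F_y - r_y F_x)\,\hat{k}$$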
The value “Mo“ is the moment and is a vector. The “r” is the positioning vector. Since we are taking moments about the axes, we will find the three moments about the origin point, so in our case the tail of the positioning vector will be the origin, and the head will be the point of force application. The “F” is the force vector that is applied at the given point. For simplicity, we will find the moment about the origin from each applied force vector, then we will sum the respective component from each of the cross product results. A free body diagram is shown in the picture below for the external forces applied at the center of mass, and the reaction forces applied at the left rear tire contact patch. Only the two forces are shown to keep the picture uncluttered; however, in practice you would do this for all of the forces in the system that the chassis will see. The following is a list of forces that would most likely be encountered in a race car application. At this point, we are only looking at forces that act on the vehicle as a whole, not internal forces such as springs, shocks, etc. You can imagine drawing a boundary box around the car and looking at things that would act on the exterior of this box.
• Applied force at the center of mass
• Reaction force at the left front tire contact patch
• Reaction force at the right front tire contact patch
• Reaction force at the left rear tire contact patch
• Reaction force at the right rear tire contact patch
• Applied vertical aerodynamic force at top center of pressure
• Applied lateral aerodynamic force at side center of pressure
• Applied longitudinal aerodynamic force at front center of pressure
For our second equation, we are going to sum the moments about the x-axis. There is one more step we need to do before we write our equation. We need to observe the fact that at each of the four tire contact patches, we only know two of the three components. We know the lateral reaction force and the longitudinal braking/acceleration force, but we know nothing about the vertical component, so we will need to break it out from the cross product calculation. We will still perform the cross product at the four contact patches, but we will only use the “x” and “y” components and leave the “z” component equal to zero. We will account for the “z” component separately. Let’s build equation two in multiple steps. First, we will sum all of the known moments about the x-axis from our cross product calculation results. We will call this sum “MXa”.
Next, we will use this result to write equation two. Equation two looks like this.
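A sketch of its form in our notation, where the sign on each t_i term depends on which side of the X-Z plane that contact patch sits:

$$\sum M_x = M_{Xa} + \sum_{i=1}^{4} \pm\, t_i P_i = 0$$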
In this equation, “ti” is the half-track position from the X-Z plane to each of the four tire contact patches. As you can see in the equation, we are starting to add some definition to our mathematical model that is telling us how lateral load is transferred from one side of the car to the other in response to some force inputs. We can start to see how much load is transferred laterally, and how this load transfer is distributed between the front and rear wheels of the car. Remember, however, that this is only load transfer as a result of a moment about the x-axis, or the longitudinal axis, of the car.
For our third equation, we will replicate what we did for equation two, but we will do it about the y-axis. We will sum all of the known moments about the y-axis from our cross product calculations, and we will call this sum “Mya”.
We will use this result to write equation three. Equation three looks like this.
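A sketch of its form, mirroring equation two but with the sign flipped as described below:

$$\sum M_y = M_{Ya} - \sum_{i=1}^{4} \pm\, l_i P_i = 0$$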
In equation three, "li" is the half-wheelbase position from the Y-Z plane to each of the four tire contact patches. Notice that we are using a negative sign as opposed to the positive sign that we used in equation two. This is to stay consistent with the form of the cross product and to keep the moments going in the right direction. With this equation, similar to equation two, we can now start to see how much load is transferred longitudinally, and how this load transfer is distributed between the left and the right wheels.
We now have the first three equations in our system of equations.
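If you want to experiment with these sums numerically, each cross product is one line in Python with NumPy. Every position, force, and variable name in this sketch is an illustrative placeholder, not setup data from the article:

```python
import numpy as np

# Positions (ft) and forces (lbs) in the chassis coordinate system.
r_cm = np.array([0.0, 0.0, 1.5])              # equivalent center of mass
F_cm = np.array([-590.0, -1880.0, -2585.0])   # resultant applied force

r_lr = np.array([-2.9, 2.7, 0.0])             # left rear contact patch
F_lr = np.array([450.0, 600.0, 0.0])          # known x/y components only;
                                              # the unknown vertical P is
                                              # accounted for separately

# np.cross gives the moment of each force about the origin; summing the
# x components of all known forces yields MXa, the y components MYa.
M_cm = np.cross(r_cm, F_cm)
M_lr = np.cross(r_lr, F_lr)
MXa = M_cm[0] + M_lr[0]
MYa = M_cm[1] + M_lr[1]
print(MXa, MYa)
```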
What exactly can we learn from looking at the equations? How have you thought about weight transfer in the past? What things did you think impacted weight transfer? Was it influenced by springs or shocks? Do we see springs or shocks in our system of equations? No, we don't. As we can see from the equations, springs and shocks have nothing to do with the amount of weight transfer. In fact, the only factors that impact weight transfer are the externally applied forces, the static weight of the car, the center of mass location, the half-wheelbases, and the half-tracks (widths). With this in mind, start thinking about how different adjustments made to a race car impact these factors. What does moving the rear end to the left or right do? What does moving ballast do? What does moving the right front out an inch do? What happens to weight transfer when we induce rear steer into the car?
In part 4 of this series, we will start to look at the equations in our model that govern how this load transfer is distributed throughout the elements (i.e., shocks, springs, links, etc.) of our race car. We will also look at how we can use this information to predict the suspension travel.
As always, if you find this blog helpful, then you can help me in return by thinking of Bartlett Motorsport Engineering next time you need to buy parts for your race car.
There is a range of Math courses offered at Phoenix: Math 7, Accelerated Math 7, Math 8, Accelerated Math 8, HS Math 1, and HS Math 2. Student placement is based upon multiple criteria, including State/Local Test scores from 4th, 5th, and 6th grades, 6th-grade teacher recommendations and evaluations, and the successful completion of prerequisite courses needed for placement in the Accelerated classes. The accelerated courses (7/8) take 3 years of math (7, 8, and HS Math 1) and condense that material into just 2 years of classroom experience. These classes, just as they are named, move through the material at a rigorous pace, and a high level of student dedication is required for success.
Mathematics | Grade 7
Ratios and Proportional Relationships
• Analyze proportional relationships and use them to solve real-world and mathematical problems.
The Number System
• Apply and extend previous understandings of operations with fractions to add, subtract, multiply, and divide rational numbers.
Expressions and Equations
• Use properties of operations to generate equivalent expressions.
• Solve real-life and mathematical problems using numerical and algebraic expressions and equations.
Geometry
• Draw, construct, and describe geometrical figures and describe the relationships between them.
• Solve real-life and mathematical problems involving angle measure, area, surface area, and volume.
Statistics and Probability
• Use random sampling to draw inferences about a population.
• Draw informal comparative inferences about two populations.
• Investigate chance processes and develop, use, and evaluate probability models.
In Grade 7, instructional time will focus on four critical areas:
(1) Students extend their understanding of ratios and develop an understanding of proportionality to solve single- and multi-step problems.
Students use their understanding of ratios and proportionality to solve a wide variety of percent problems, including those involving discounts, interest, taxes, tips, and percent increase or decrease. Students solve problems about scale drawings by relating corresponding lengths between the objects or by using the fact that relationships of lengths within an object are preserved in similar objects. Students graph proportional relationships and understand the unit rate informally as a measure of the steepness of the related line, called the slope. They distinguish proportional relationships from other relationships.
(2) Students develop a unified understanding of numbers, recognizing fractions, decimals (that have a finite or a repeating decimal representation), and percents as different representations of rational numbers.
Students extend addition, subtraction, multiplication, and division to all rational numbers, maintaining the properties of operations and the relationships between addition and subtraction, and multiplication and division. By applying these properties, and by viewing negative numbers in terms of everyday contexts (e.g., amounts owed or temperatures below zero), students explain and interpret the rules for adding, subtracting, multiplying and dividing with negative numbers. They use the arithmetic of rational numbers as they formulate expressions and equations in one variable and use these equations to solve problems.
(3) Students continue their work with area from Grade 6, solving problems involving the area and circumference of a circle and the surface area of three-dimensional objects.
In preparation for work on congruence and similarity in Grade 8, they reason about relationships among two-dimensional figures using scale drawings and informal geometric constructions, and they gain familiarity with the relationships between angles formed by intersecting lines. Students work with three-dimensional figures, relating them to two-dimensional figures by examining cross-sections. They solve real-world and mathematical problems involving area, surface area, and volume of two- and three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes and right prisms.
(4) Students build on their previous work with single data distributions to compare two data distributions and address questions about differences between populations.
They begin informal work with random sampling to generate data sets and learn about the importance of representative samples for drawing inferences.
Mathematics | Grade 8
The Number System
• Know that there are numbers that are not rational, and approximate them by rational numbers.
Expressions and Equations
• Work with radicals and integer exponents.
• Understand the connections between proportional relationships, lines, and linear equations.
• Analyze and solve linear equations and pairs of simultaneous linear equations.
Functions
• Define, evaluate, and compare functions.
• Use functions to model relationships between quantities.
Geometry
• Understand congruence and similarity using physical models, transparencies, or geometry software.
• Understand and apply the Pythagorean theorem.
• Solve real-world and mathematical problems involving the volume of cylinders, cones, and spheres.
Statistics and Probability
• Investigate patterns of association in bivariate data.
In Grade 8, instructional time will focus on three critical areas:
(1) Students use linear equations and systems of linear equations to represent, analyze, and solve a variety of problems.
Students recognize equations for proportions (y/x = m or y = mx) as special linear equations (y = mx + b), understanding that the constant of proportionality (m) is the slope, and the graphs are lines through the origin. They understand that the slope (m) of a line is a constant rate of change, so if the input or x-coordinate changes by an amount A, the output or y-coordinate changes by the amount m•A. Students also use a linear equation to describe the association between two quantities in bivariate data (such as arm span vs. height for students in a classroom). At this grade, fitting the model, and assessing its fit to the data are done informally. Interpreting the model in the context of the data requires students to express a relationship between the two quantities in question and to interpret components of the relationship (such as slope and y-intercept) in terms of the situation. Students strategically choose and efficiently implement procedures to solve linear equations in one variable, understanding that when they use the properties of equality and the concept of logical equivalence, they maintain the solutions of the original equation. Students solve systems of two linear equations in two variables and relate the systems to pairs of lines in the plane; these intersect, are parallel, or are the same line. Students use linear equations, systems of linear equations, linear functions, and their understanding of the slope of a line to analyze situations and solve problems.
(2) Students grasp the concept of a function as a rule that assigns to each input exactly one output.
They understand that functions describe situations where one quantity determines another. They can translate among representations and partial representations of functions (noting that tabular and graphical representations may be partial representations), and they describe how aspects of the function are reflected in the different representations.
(3) Students use ideas about distance and angles, how they behave under translations, rotations, reflections, and dilations, and ideas about congruence and similarity to describe and analyze two-dimensional figures and solve problems.
Students show that the sum of the angles in a triangle is the angle formed by a straight line and that various configurations of lines give rise to similar triangles because of the angles created when a transversal cuts parallel lines. Students understand the statement of the Pythagorean Theorem and its converse and can explain why the Pythagorean Theorem holds, for example, by decomposing a square in two different ways. They apply the Pythagorean Theorem to find distances between points on the coordinate plane, to find lengths, and to analyze polygons. Students complete their work on volume by solving problems involving cones, cylinders, and spheres.
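For example, applying the theorem on the coordinate plane: the distance between the points (1, 2) and (4, 6) has a horizontal leg of 3 and a vertical leg of 4, so the distance is √(3² + 4²) = √25 = 5.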
Accelerated Math Offerings
There is an option to take the Accelerated Math 7 class when you are entering Phoenix Middle School. This decision will be based on the suggestion of the 6th-grade math teacher, test scores, student desire, and teacher discretion. If a student completes the Accelerated Math 7 class, then they will automatically enter into Accelerated Math 8 during the 8th-grade year. After completing BOTH years of accelerated courses, the student will have earned credit for the Math 7 class, the Math 8 class, and the High School Math 1 class. Upon entering High School, the student will go directly into the Math 2 class. Students must master both Accelerated courses in order to earn High School credit.
HS Math 1 and HS Math 2 are offered at Phoenix in an effort to keep our students in the building all day. Placement in these classes is in sequence after successful completion of the Math 8 course either in Elementary school or during the 7th-grade year. These courses are directly aligned to the graded course of study / CCSS for the Math 1/2 classes taught at the high school and successful completion of these courses provides the student with the HS credit for the class, but the grade in this class DOES NOT apply to the student’s eventual HS G.P.A.
The Social Studies curriculum is designed for students to accomplish multiple objectives. Students will learn major concepts and themes from the following curriculum areas including History, Geography, Economics, and Government. Throughout the year, they will continue to develop and implement thinking, writing, reading, and listening skills within a social studies context. Through the social studies curriculum, students will continue to develop and apply skills in information gathering, organization, discussion, and presentation. In addition, students will be guided in the understanding of concepts and how to apply them to real-life situations. This content area helps students learn about the origination of values and how attitudes and values influence our actions and the actions of others. Students learn how to integrate concepts and factual information through inductive and deductive reasoning. Analyzing, synthesizing, and problem-solving are major objectives of the social studies program.
7th Grade World Geography/Ancient History
Students in seventh-grade Social Studies explore world events occurring between 1000 BCE and 1750 CE. We will be examining the enduring impact of the early civilizations of Central & South America, West Africa, Greece, and Rome. The class will be analyzing the effects of geography, economics, religion, and governmental structures on human interaction. There are also many skills and methods social scientists use that will be introduced and practiced throughout the year.
Ancient World History continues our students' study of ancient world history, carrying it up through the early European exploration of North America. Comparative methods, drawing on contemporary events, are used to help students find meaning in the subject.
8th Grade American History
8th-grade social studies covers American History and the United States Government from colonization in the late 16th century to Reconstruction after the Civil War. This is the first part of the American History sequence, which will be completed in the 10th-grade year with Reconstruction to the present. The course of study contains the topics of colonization, independence, forming a new government, new challenges, expansion, industrialization, and the causes and effects of the Civil War.
Besides historical content, we will focus on the Common Core, citizenship, cause-and-effect relationships, opinion and fact, interpretation of resources, problem-solving, presentation skills, quality of work, and analytical skills. Although content builds on past historical knowledge, our focus will be to look deeper at the origins of our country today.
Electronic configuration refers to the arrangement or layout of electrons in the shells or orbits around the nucleus. The outermost shell or orbit around the nucleus is called the valence shell, and the electrons present in it are called valence electrons.
Rules for electronic configuration or distribution of electrons
There are four energy shells or orbits around the nucleus that are named K, L, M, and N or 1, 2, 3, and 4 starting from the innermost orbit to the outermost orbit. There are three rules for the distribution of electrons in the shells around the nucleus.
First rule: The maximum number of electrons in an orbit or energy shell is equal to 2n², where n = the number of the shell. For example, for the first orbit (K), n = 1. So, the first shell (energy level or orbit), which is indicated by 'K', can have a maximum of 2 electrons, as 2n² = 2 where n = 1. Similarly, the second shell, which is indicated by 'L', can have a maximum of 8 electrons (2n² = 8, where n = 2). Similarly, the third shell, which is indicated by 'M', can have a maximum of 18 electrons (2n² = 18, where n = 3). Finally, the fourth shell, which is indicated by 'N', can have a maximum of 32 electrons, as 2n² = 32 where n = 4.
Second rule: Shells or orbits are filled one by one in a step-wise manner, which means an outer shell cannot be filled until the inner shell is filled completely, i.e., holds the maximum number of electrons it can have. For example, only when the 'K' shell has 2 electrons can the next shell, 'L', begin to fill toward its 8 electrons.
Third rule: The outermost orbit cannot have more than 8 electrons.
Let us take an example to understand the rules for electronic distribution;
The atomic number of Calcium is 20. So, it has 20 protons and 20 neutrons in the nucleus and 20 electrons around the nucleus. As per the above electronic distribution rules, its first shell will have 2 electrons, leaving us with 18 electrons. Now, the second shell can have 8 electrons, so after giving 8 electrons to the second shell, 10 electrons are left. The third shell can accommodate a maximum of 18 electrons.
Now, it seems we can put the remaining 10 electrons in the third shell. But we cannot do this; instead, we will put 8 electrons in the third shell and the remaining 2 electrons in the fourth shell. The reason for this is the rule that says there can be a maximum of eight electrons in the outermost shell. In this case, if we allotted all 10 electrons to the third shell, then there would be no fourth orbit. That would make the third orbit the outermost orbit with 10 electrons, which would contradict the third rule, which says that the outermost orbit can have a maximum of 8 electrons and therefore cannot have 10. So, as per the third rule, there will be 8 electrons in the third orbit, and the remaining 2 electrons will be accommodated in the fourth orbit.
Why are there 8 electrons in the third shell of Calcium? Why can't the electron arrangement be 2, 8, 6, 4 in this case?
The other rule says that unless we fill the inner shells completely we cannot fill the outer shells. So, only when the third orbit gets 8 electrons we can start filling the fourth orbit.
What is a subshell?
The shell or orbit around the nucleus contains subshells. A subshell is a pathway followed by electrons while moving within a shell. It can be of four types: s, p, d, and f. The first shell (n = 1) consists of only one subshell, which is 's'; the second shell (n = 2) consists of two subshells, 's' and 'p'; the third shell (n = 3) consists of three subshells, 's', 'p', and 'd'; and the fourth shell (n = 4) consists of four subshells, 's', 'p', 'd', and 'f'. Furthermore, each subshell contains one or more orbitals: s contains 1 orbital, p has 3 orbitals, d has 5 orbitals, and f has 7 orbitals. A shell is also known as an orbit. So, the orbit is different from the orbital.
We can say that a shell is a collection of subshells, or is made of subshells, which are named s, p, d, and f. The principal quantum number (n) of all subshells in a shell remains the same. For example, n = 1 for the first shell, so all the subshells present in it will have n = 1.
Electrons revolve around the nucleus in different shells. Each shell contains subshells with the same principal quantum number (n). For example, the 3s, 3p and 3d subshells belong to the same shell and have the same principal quantum number (n), 3. A large value of n means the shell is far away from the nucleus, whereas a small value of n shows the shell is closer to the centre or nucleus. So, electrons in the same shell have the same value of n and are located at the same distance from the nucleus.
Just like a shell is a collection of subshells, a subshell is a collection of (or consists of) orbitals with the same principal quantum number 'n' and angular momentum quantum number 'l'. The subshells are denoted by the letters s, p, d, f, g, h and so on. The angular momentum quantum number ('l') for the s subshell is equal to 0; for the p subshell 'l' = 1; for the 'd' subshell 'l' = 2; and for the 'f' subshell 'l' = 3.
The orbitals' shape is different in different subshells. The electrons of a subshell, which have the same 'l', revolve around the nucleus in orbitals of nearly the same shape.
Each orbital holds at most two electrons. The electrons of the same orbital have the same principal quantum number, angular momentum quantum number, and magnetic quantum number 'm'; it is 'm' that differentiates the different orbitals of a subshell.
The orbitals of the same subshell have the same 'l' and 'n', and subshells with the same 'n' value are part of the same shell.
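The shells and their subshells line up as follows:

| Shell | n | Subshells |
| --- | --- | --- |
| K | 1 | 1s |
| L | 2 | 2s, 2p |
| M | 3 | 3s, 3p, 3d |
| N | 4 | 4s, 4p, 4d, 4f |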
From the above table, it is easy to understand that the first shell has one 1s subshell, and the second shell consists of 2s and 2p subshells. So, subshells are subdivisions of shells and are further divided into or consist of orbitals.
Difference between Shell, Subshell and Orbital
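Based on the descriptions above, the three terms can be compared as follows:

| | Shell | Subshell | Orbital |
| --- | --- | --- | --- |
| Identified by | principal quantum number n | n and angular momentum quantum number l | n, l and magnetic quantum number m |
| Made up of | one or more subshells | one or more orbitals | a region holding at most 2 electrons |
| Named | K, L, M, N | s, p, d, f | 1 orbital in s, 3 in p, 5 in d, 7 in f |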
How to write the electronic configuration / Aufbau Principle
The Aufbau principle refers to the manner in which electrons should be filled in the orbitals of an atom when it is in the ground state. As per this principle, the electrons are filled into orbitals in the increasing order of the orbitals' energy. So, according to the Aufbau principle, the orbitals with low energy should be filled first before filling the orbitals with high energy. The resulting order of filling is 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, and so on.
Aufbau is a German word whose meaning is 'construct' or 'build-up'. This principle also helps understand the location of electrons and their corresponding energy shells. For example, carbon has 6 electrons, so its electronic configuration is 1s² 2s² 2p².
As per the Pauli Exclusion Principle, only two electrons can occupy the same orbital and they must have opposite or antiparallel spins. Besides this, as per Hund's rule, the orbitals should be singly occupied first before pairing the electrons.
Salient features of the Aufbau Principle
• Electrons always occupy the lowest-energy orbital available; higher-energy orbitals are filled only after the lower-energy ones are full.
• The energy order of the orbitals can be estimated with the (n + l) rule: the orbital with the lower value of n + l fills first, and when two orbitals have the same n + l, the one with the lower n fills first.
Electronic Configuration according to the Aufbau Principle:
Sulphur: Its atomic number is 16, so it has 16 electrons. Now, according to the Aufbau principle, two electrons will occupy the 1s subshell, eight will be present in the 2s and 2p subshells, and the remaining electrons will occupy the 3s and 3p subshells. So its electronic configuration is 1s² 2s² 2p⁶ 3s² 3p⁴.
Nitrogen: It has 7 electrons, as its atomic number is 7. Its 7 electrons will occupy the 1s, 2s, and 2p orbitals, since according to the Aufbau principle the electrons are filled into the 1s, 2s, and 2p orbitals in that order. So, its electronic configuration is written as 1s² 2s² 2p³.
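As a complement to these examples, here is a small Python sketch (the function name and structure are ours, not from the original tutorial) that fills subshells in the standard order and prints a ground-state configuration. It deliberately ignores exceptions such as chromium and copper, which are discussed next:

```python
# Subshells listed in the Madelung / (n + l) filling order.
SUBSHELLS = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p",
             "5s", "4d", "5p", "6s", "4f", "5d", "6p", "7s"]
# Maximum electrons per subshell type: 2 * (2l + 1).
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

def aufbau_configuration(atomic_number: int) -> str:
    """Fill subshells in order of increasing energy and return the
    resulting configuration string."""
    remaining = atomic_number
    parts = []
    for subshell in SUBSHELLS:
        if remaining <= 0:
            break
        filled = min(remaining, CAPACITY[subshell[-1]])
        parts.append(f"{subshell}{filled}")
        remaining -= filled
    return " ".join(parts)

print(aufbau_configuration(16))  # sulphur: 1s2 2s2 2p6 3s2 3p4
print(aufbau_configuration(7))   # nitrogen: 1s2 2s2 2p3
```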
The electronic configuration of chromium is [Ar] 3d⁵ 4s¹ and not [Ar] 3d⁴ 4s² as it should be according to the Aufbau principle. This exception arises for several reasons, including the extra stability offered by half-filled subshells (each of the 5 orbitals in the d subshell gets 1 electron) and the small difference in energy between the 3d and the 4s subshells.
The repulsion between electrons in half-filled subshells is low, which tends to increase stability. Similarly, completely filled subshells also increase the stability of the atom. So, the electronic configuration of some atoms does not obey the Aufbau principle due to the different energy gaps between the orbitals.
Copper is also an exception to this principle, as its electronic configuration is [Ar] 3d¹⁰ 4s¹. This can be attributed to the stability provided by a completely filled 3d subshell.
I think this is an excellent and quite fascinating account of a serious and genuinely useful application of some of Mathematica's vast capabilities, which made me feel that the kind of stuff that the rest of us do is more akin to playing. To my mind, it also should make very clear (to anyone who had any doubts) the complete irrelevance of most of the "criticism" of Mathematica that Richard Fateman has been posting for over 2 decades (not that I expect him to stop doing it or even modify it in any way).
I don't think the points John makes need any further support, so I want to address just one issue: Richard's use of the words "erroneous", "incorrect", "buggy", etc. This has, in fact, already been addressed by John in the first sentence of his reply, but so briefly that I think there is still some room for additional comment. Almost always when RJF uses such words in connection with Mathematica he is playing a kind of game, well familiar to the veterans of this forum but which could confuse newcomers. The idea is to make ambiguous statements that can be interpreted in at least two ways: one could be called "strong" and the other "weak". The "strong" interpretation is the one you hope will influence people not very familiar with the topic (or with this sort of rhetorical trick); the weak one is what you turn to when pressed by people who demand precise justification of your claims. When this happens, it turns out that the alleged "errors" are not at all what people normally call errors but merely aspects of design that Richard does not like (assuming, of course, that he does not have another hidden motive that goes beyond disliking the workings of Mathematica). Thus, whenever Richard sees a chance to influence someone he thinks is new to Mathematica, he suggests that Mathematica's significance arithmetic is prone to give wrong answers and is unreliable. But when pressed, he has admitted on a number of occasions that:
1. Significance arithmetic, being a (faster) version of interval arithmetic, can be useful when used by people who understand it, and it is used quite reliably by many built-in Mathematica functions such as NSolve (and also Solve).
2. It is easy to switch to fixed precision arithmetic whenever one wants with a simple usage of Block.
In other words: significance arithmetic is a feature that Mathematica offers in addition to standard, fixed precision arithmetic. How can more be worse than less? Of course RJF, when pressed, has admitted all this (I can post references but I don't think he will deny it), and his objections actually amount to just two things: the choice of significance arithmetic as the default for extended precision computations, and the fact that Mathematica treats what are actually intervals as "numbers" instead of just calling them something else ("fuzz balls"). Every time there is a serious discussion of these things he is forced to concede all these points, but then, at the next opportunity, he starts the whole thing all over again. In the end, he becomes so careless that he ends up writing comical nonsense like:
> I will say that for scientific computing it is probably a bad feature > to have finite numbers x such that x ==0 and x+1 == x.
Excuse me? I have always assumed that every number system has at least one finite number x such that x + 1 == x; this follows from the group axioms. Also, by the way, if we are talking about group addition, then "x == 0 and x + 1 == x" is not a very economical way to express x == 0.
More seriously, there are two things being insinuated here, both of them false. The first is that having "numbers" with unusual properties is somehow "wrong" or even "eccentric" in mathematics. This is, of course, completely false, and one can give lots of examples. To take just one: in nonstandard analysis one has infinitely many "infinitely small numbers" x such that nx < 1 for every positive integer n. This is both logically sound and very useful in practical proofs and computations. Exactly the same is true of "fuzz balls".
Another insinuation, equally false, is that an unsuspecting "naive" user of Mathematica could fall into some trap because of this. Naive users of Mathematica practically never use arbitrary precision arithmetic. Users who need it will almost always look into the documentation and realise that they won't be dealing with numbers in the ordinary sense but with intervals of variable size. This is why examples of the kind that RJF loves posting never come up naturally. The only time they occur is either when RJF posts them or sometimes when someone discovers low precision numbers and tries things like:
1`0 == 0
1`0 == 1
But, of course, to be able to enter numbers with no precision you really need to know, well, what it means for a number to have Precision 0. At this point, I suspect, RJF will be tempted to come up with his favourite example of the kind of thing that awaits an unsuspecting user who for some reason needs to evaluate:
z = 1.11111111111111111111;While[(z = 2*z - z) != 0, Print[z]]
Well, see it for yourself (in Mathematica 9) and decide if anyone would find it so confusing.
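As an aside for newcomers: the behaviour of those inputs is easy to reproduce with a toy interval model in Python (this is only an illustration of the "fuzz ball" idea, not how Wolfram's significance arithmetic is actually implemented):

    class Fuzz:
        """A toy 'significance arithmetic' number: a centre plus an uncertainty radius."""
        def __init__(self, center, radius):
            self.center, self.radius = float(center), float(radius)

        def _coerce(self, other):
            return other if isinstance(other, Fuzz) else Fuzz(other, 0.0)

        def __add__(self, other):
            o = self._coerce(other)
            # Uncertainties accumulate, as in crude interval arithmetic.
            return Fuzz(self.center + o.center, self.radius + o.radius)

        def __eq__(self, other):
            o = self._coerce(other)
            # Two fuzz balls are "equal" when their intervals overlap.
            return abs(self.center - o.center) <= self.radius + o.radius

    one_p0 = Fuzz(1, 1)          # roughly what 1`0 denotes: the interval [0, 2]
    print(one_p0 == 0)           # True: 0 lies inside [0, 2]
    print(one_p0 == 1)           # True: 1 lies inside [0, 2]
    print(one_p0 + 1 == one_p0)  # also True: the widened intervals still overlap

Once "number" is read as "interval", nothing about x == 0 together with x + 1 == x is mysterious.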
On 30 May 2013, at 12:15, John Doty <email@example.com> wrote:
> Changing the topic here.
>
> On Tuesday, May 28, 2013 1:49:00 AM UTC-6, Richard Fateman wrote:
>
>> Learning Mathematica (only) exposes a student to a singularly erroneous
>> model of computation,
>
> A personal, subjective judgement. However, I would agree that exposing the student to *any* single model of computation, to the exclusion of others, is destructive.
>
>> Nice that you concede it is eccentric.
>
> Concede? I praise its eccentricity! It takes me places other tools cannot easily go.
>
>> Productive perhaps if you do not
>> encounter a quirk.
>
> There is no nontrivial quirk-free software.
>
>> Especially a hidden quirk that gives the wrong
>> answer but no warning.
>
> Mathematica applied to real problems is pretty good here.
>
>> And if you are not in a hurry for numerical
>> results.
>
> OK, let's consider how I use Mathematica in mixed-signal chip design. For my video chain chips, I start with two different approaches in Mathematica. I have a nonlinear simulation environment that breaks the chip down into blocks, represented by functions, and composes a function representing the effect of a single clock step on the chip state from these functions. I iterate that function to "simulate". Some of the blocks are initially represented as Z transforms, so Mathematica's algebraic capabilities come in handy. The ease of partial evaluation helps with optimization: by evaluating as much of the function as possible before iterating it, special cases like constant inputs become very simple and fast.
>
> I also have a Mathematica model that starts as symbolic Z and Fourier transforms, and models the chip as linear operators. For numerical results, these operators turn into matrices.
>
> Once I've established the design parameters, I reduce the functional blocks to circuits and simulate with that fine Berkeley product, SPICE, the lingua franca of circuit simulation. You think Mathematica has problems? You've never used SPICE.
>
> Linguistically, SPICE is an absolute mess. "Grammar" is utterly ad-hoc, and variable with dialect. "1N914B" is a perfectly good model name for a diode, but "2N2222A" cannot be used to name a transistor model in some dialects. There are a couple of dozen different dialects around, most containing incompatible proprietary extensions.
>
> For quirky unpredictability, SPICE is much, much worse than Mathematica. SPICE users get used to seeing spurious "trap oscillations", large amounts of energy appearing out of nowhere, and ridiculous time steps. Tuning the numerics to get sane results is an arcane art. As a programming language, SPICE is about as sophisticated as BASIC, but much less regular. Nevertheless, SPICE is very widely used and very productive. In the end, people find that it gets the job done (although many get rather exasperated in the process!).
>
> But the big problem with SPICE isn't that it's quirky and error-prone, but that it buries the problem in (mostly irrelevant) detail. I'm simulating thousands of transistors, each with ~100 model parameters, with state changing on picosecond time scales. My SPICE chip simulations tend to run at a billion times slower than real time. Data output is voluminous and difficult to analyze (sometimes I read it into Mathematica for reduction). These problems limit the range of questions I can practically ask of SPICE.
>
> On the other hand, the iterated function approach in Mathematica is 10,000-100,000 times faster than the SPICE approach, so it's a lot better for probing high-level behavior. The linear algebra approach in Mathematica is even faster: bang a few matrices together and it tells me approximately what I'd get from averaging an unlimited number of simulation runs.
>
> Because SPICE is so slow and detailed it's very difficult to use as a design tool, where the question is "I want this behavior, how do I get it?" It's better for design verification: "I have this circuit, how will it behave?". The linear approach in Mathematica is the quickest for design optimization, but of course it can't probe nonlinear phenomena in "large signal" cases. The iterated function approach in Mathematica is a good compromise to bridge the gap.
>
> So, when I have two Mathematica models and a SPICE model that agree that the design will meet requirements, and agree with each other where their capabilities overlap, I release the design to the layout contractor. One result is another SPICE model, extracted from the silicon geometry, for verification. This model is extremely complex, with tens of thousands of "parasitic" resistors and capacitors, but I'll run a few test cases through it before releasing the design to the mask aggregator. So, before any silicon is patterned, I get four fairly independent looks at the design from different viewpoints through two radically different tools.
>
> There are still many things that can go wrong before I have an actual chip powered up and processing video. Some of them involve other software, but they have nothing to do with Mathematica, so I won't go into them here.
>
> The real world simply doesn't work the way you imagine in this case. The design process involves cross checks between multiple tools and models, so few design errors escape detection. Furthermore, I haven't seen any errors attributable to the "quirks" of Mathematica in this process. I, of course, make and (hopefully) correct errors at a rapid pace. SPICE is rather treacherous. But correct and accurate calculation in Mathematica just isn't a problem in this engineering flow.
PSY201: Chapter 5: The Normal Curve and Standard Scores
– Normal curve
+ a very important distribution in the behavioral sciences
+ three principal reasons why...
- 1. many of the variables measured in behavioral science research have distributions that quite closely
approximate the normal curve (ie: height, weight, intelligence, and achievement are a few examples)
- 2. many of the inference tests used in analyzing experiments have sampling distributions that become normally
distributed with increasing sample size. (ie: sign test & Mann-Whitney U test)
- 3. many inference tests require sampling distributions that are normally distributed. The z test, Student's t test,
and the F test are examples of inference tests that depend on this point → much of importance of normal curve
occurs in conjunction with inferential statistics.
The Normal Curve:
– normal curve is a theoretical distribution of population scores.
+ a theoretical curve and is only approximated by real data
+ bell-shaped curve that is described by the equation: f(X) = [1/(σ√(2π))] e^(−(X−μ)²/(2σ²))
– curve has two inflection points, one on each side of the mean
+ inflection points are located where the curvature changes direction
+ ie: inflection points are located where curve changes from being convex downward to being convex upward
- if the bell-shaped curve is a normal curve, the inflection points are at ±1 standard deviation from the mean (μ ± 1σ)
- as the curve approaches the horizontal axis, it is slowly changing its Y value.
- the curve never quite reaches the axis
- it approaches the horizontal axis and gets closer and closer to it, but it never quite touches it.
- curve is asymptotic to the horizontal axis
– inflection points, areas under the curve, and the horizontal asymptote are labelled in the diagram on page 97
Area Contained Under the Normal Curve:
– in distributions that are normally shaped, there is a special relationship between the mean and the standard
deviation with regard to the area contained under the curve
– when a set of scores is normally distributed, 34.13% of the area under the curve is contained between the mean
(μ) and a score that is equal to μ + 1σ; 13.59% of the area is contained between a score equal to μ + 1σ and a
score of μ + 2σ; 2.15% of the area is contained between scores of μ + 2σ and μ + 3σ; and 0.13% of the area
exists beyond μ + 3σ. This accounts for 50% of the area
+ since curve is symmetrical, same percentages hold for scores below the mean
+ since frequency is plotted on vertical axis, these percentages represent the percentage of scores contained
within the area – ie:
+ have a population of 10,000 IQ scores
+ distribution normally shaped with μ = 100 and σ = 16
+ since scores are normally distributed, 34.13% of scores are contained between scores of 100 and 116 (μ + 1σ
= 100 + 16 = 116), 13.59% between 116 and 132 (μ + 2σ = 100 + 32 = 132), 2.15% between 132 and 148, and
0.13% above 148
+ similarly, 34.13% of scores fall between 84 and 100, 13.59% between 68 and 84, 2.15% between 52 and 68,
and 0.13% below 52.
– to calculate the number of scores in each area, multiply the relevant percentage by the total number of scores →
there are 34.13% x 10,000 = 3413 scores between 100 and 116, 13.59% x 10,000 = 1359 scores between 116
and 132, and 215 scores between 132 and 148; 13 scores are greater than 148.
+ for other half of distribution, there are 3413 scores between 84 and 100, 1359 scores between 68 and 84, and
215 scores between 52 and 68; there are 13 scores below 52.
+ these frequencies would be true only if distribution is exactly normally distributed
+ in actual practice, the frequencies would vary slightly depending on how close the distribution is to a true normal curve
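(These band percentages need not be memorized; they can be recomputed from the normal CDF. A short Python check against the figures above, using the same μ = 100, σ = 16 population:)

    from statistics import NormalDist

    nd = NormalDist(mu=100, sigma=16)   # the IQ example population
    N = 10_000

    edges = [100, 116, 132, 148]        # mean, mean+1sd, mean+2sd, mean+3sd
    for lo, hi in zip(edges, edges[1:]):
        p = nd.cdf(hi) - nd.cdf(lo)     # area between the two scores
        print(f"{lo}-{hi}: {p:.2%} of scores, about {p * N:.0f} of {N}")
    print(f"beyond 148: {1 - nd.cdf(148):.2%}, about {(1 - nd.cdf(148)) * N:.0f}")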
Standard Scores (z Scores):
– IQ of 132...
+ a score is meaningless unless you have a reference group to compare against
+ without one, can't tell whether the score is high, average, or low
– the score is one of the 10,000 scores of the distribution → gives the IQ of 132 some meaning
+ ie: can determine the percentage of scores in the distribution that are lower than 132 → determining the percentile
rank of the score of 132 (the percentile rank of a score is defined as the percentage of scores that fall below that score in the distribution)
– 132 is 2 standard deviations above the mean
+ in normal curve, there are 34.13 + 13.59 = 47.72% of the scores between the mean and a score that is 2
standard deviations above the mean
+ to find the percentile rank of 132, need to add this percentage to the 50.00% of scores that lie below the mean → 97.72%
(47.72 + 50.00) of the scores fall below your IQ score of 132.
+ should be happy to be intelligent
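(A quick computational check of that 97.72% figure, using Python's standard library in place of a z table:)

    from statistics import NormalDist

    z = (132 - 100) / 16          # z = 2: two standard deviations above the mean
    print(NormalDist().cdf(z))    # 0.9772... -> percentile rank of about 97.72%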
– to solve the problem, had to determine how many standard deviations the raw score of 132 was above or below the mean
+ transformed the raw score into a standard score, also called a z score
– a z score is a transformed score that designates how many standard deviation units the corresponding raw score
is above or below the mean: z = (X − μ)/σ – the process by which the raw score is altered is called a score transformation
+ z transformation results in a distribution having a mean of 0 and a standard deviation of 1
+ the reason z scores are called standard scores is that they are expressed relative to a distribution with a mean of 0 and a
standard deviation of 1
– in conjunction with the normal curve, z scores allow us to determine the number or percentage of scores that fall
above or below any score in the distribution
+ z scor
The velocity of an object moving rectilinearly is given as a function of time by v = 4t − 3t², where v is in m/s and t is in seconds. The average velocity of the particle between t = 0 and t = 2 seconds is
Two balls are dropped from the top of a high tower with a time interval of t₀ seconds, where t₀ is smaller than the time taken by the first ball to reach the floor, which is perfectly inelastic. The distance S between the two balls, plotted against the time lapse t from the instant of dropping the second ball, is best represented by
A body dropped from the top of a tower covers a distance 7h in the last second of its journey, where h is the distance covered in the first second. How much time does it take to reach the ground?
The speed of a motor launch with respect to the water is v = 5 ms⁻¹ and the speed of the stream is u = 3 ms⁻¹. The launch travelled 3.6 km upstream, then turned about and caught up with the float. How long is it before the launch reaches the float again? (Find the answer in hours.)
On a highway, two buses A and B are running at the same velocity of magnitude 30 ms⁻¹. The brakes cause a deceleration of 30/7 ms⁻² in bus A and that of bus B is 3 ms⁻². In an emergency, when the driver of the front bus applies brakes, immediately its rear light turns red and braking begins. In response, the driver of the rear bus also applies brakes to avoid a collision with the front bus. Every driver takes 1 s to apply the brakes after he sees a need for it. If bus A is ahead of bus B, then the minimum separation between the buses before the driver of bus A applies the brakes is x₁. If bus B is running ahead of bus A, then the minimum separation between the buses before the driver of bus B applies the brakes is x₂. The value of x₁/(3x₂) is
Balls A and B are released from rest from the roof of a building at t = 0 and t = 2 s, respectively. Ball A strikes the ground and comes back with the same speed. After some time, balls A and B meet each other at a height of 55 m from the ground. If the height of the building is 60n metres, then the value of n is
A stone is thrown vertically upward. When the stone is at point A, its distance from a certain point O is 6√5 m (OA = 6√5 m) and the component of its velocity along OA is nonzero. When it is at point B (OB = 10 m), the velocity at B is zero. When it is at point C (OC = 6 m), the component of velocity of the particle along OC is zero. If the velocity of projection of the stone is v₀ = 5√n ms⁻¹, then the value of n is
A car accelerates from rest at a constant rate α for some time, after which it decelerates at a constant rate β and comes to rest. If the total time elapsed is t, then the maximum velocity acquired by the car is
A particle moves with uniform acceleration along a straight line AB. Its velocities at A and B are 2 m/s and 14 m/s, respectively. M is the midpoint of AB. The particle takes t₁ seconds to go from A to M and t₂ seconds to go from M to B. Then t₂/t₁ is
A point moves with uniform acceleration, and v₁, v₂, and v₃ denote the average velocities in three successive intervals of time t₁, t₂, and t₃. Which of the following relations is correct?
A man swimming downstream overtakes a float at a point M. After travelling a distance D he turned back and passed the float at a distance of D/2 from the point M. Then the ratio of the speed of the swimmer with respect to still water to the speed of the river will be
From a high tower, at time t = 0, one stone is dropped from rest and simultaneously another stone is projected vertically up with an initial velocity. The graph of distance S between the two stones plotted against time t will be
Each of the three graphs represents acceleration versus time for an object that already has a positive velocity at time t₁. Which graphs show an object whose speed is increasing for the entire time interval between t₁ and t₂?
In travelling a distance of 3 km between points A and D, a car is driven at 100 km h⁻¹ from A to B for t seconds and at 60 km h⁻¹ from C to D for t seconds. If the brakes are applied for 4 seconds between B and C to give the car a uniform deceleration, the value of t is
A particle moving in a straight line covers half the distance with speed V₀. The other half of the distance is covered in two equal time intervals with speeds V₁ and V₂, respectively. The average speed of the particle during this motion is
A body travels for 15 s starting from rest with a constant acceleration. If it travels distances x, y, and z in the first 5 s, the second 5 s, and the next 5 s, respectively, the relation between x, y, and z is
From the top of a tower of height 400 m, a ball is dropped by a man; simultaneously, from the base of the tower, another ball is thrown up with a velocity of 50 m/s. At what distance from the base of the tower will they meet?
A student is standing at a distance of 50 metres from a bus. As soon as the bus begins its motion (starts moving away from the student) with an acceleration of 1 ms⁻², the student starts running towards the bus with a uniform velocity u. Assuming the motion to be along a straight road, the minimum value of u so that the student is able to catch the bus is:
A particle is projected with velocity v₀ along the x-axis. The deceleration on the particle is proportional to the square of the distance from the origin, i.e., a = αx². The distance at which the particle stops is
A drunkard is walking along a straight road. He takes five steps forward and three steps backward and so on. Each step is 1 m long and takes 1 s. There is a pit on the road 11 m away from the starting point. The drunkard will fall into the pit after
A particle is moving in a straight line and passes through a point O with a velocity of 6 ms⁻¹. The particle moves with a constant retardation of 2 ms⁻² for 4 s and thereafter moves with constant velocity. How long after leaving O does the particle return to O?
A bus is moving with a velocity of 10 ms⁻¹ on a straight road. A scooterist wishes to overtake the bus in 100 s. If the bus is at a distance of 1 km from the scooterist, with what velocity should the scooterist chase the bus?
A particle is thrown up inside a stationary lift of sufficient height. The time of flight is T. Now it is thrown again with the same initial speed v₀ with respect to the lift. At the time of the second throw, the lift is moving up with speed v₀ and uniform acceleration g upward (the acceleration due to gravity). The new time of flight is
An insect moving along a straight line travels in every second a distance equal to the magnitude of the time elapsed. Assuming acceleration to be constant, and that the insect starts at t = 0, find the magnitude of the initial velocity of the insect.
On a highway, two buses A and B are running at the same velocity of magnitude 30 ms⁻¹. The brakes cause a deceleration of 3 ms⁻² in bus A and that of bus B is 30/7 ms⁻². In an emergency, when the driver of the front bus applies brakes, immediately its rear light turns red and braking begins. In response, the driver of the rear bus also applies brakes to avoid a collision with the front bus. Every driver takes 1 s to apply the brakes after he sees a need for it. If bus A is ahead of bus B, then the minimum separation between the buses before the driver of bus A applies the brakes is x₁. If bus B is running ahead of bus A, then the minimum separation between the buses before the driver of bus B applies the brakes is x₂. The value of x₁/(3x₂) is.
A lift performs the first part of its ascent with uniform acceleration a and the remainder with uniform retardation 2a. The lift starts from rest and finally comes to rest. If t is the total time of ascent, find the height ascended by the lift.
A truck is moving at a speed of 72 km h⁻¹ on a straight road. The driver can produce a deceleration of 2 ms⁻² by applying brakes. The stopping distance of the truck is 13x m if the reaction time of the driver is 0.2 s. The value of x is.
A stone is thrown vertically upward. When the stone is at point A, its distance from a certain point O is 6√5 m at t = 0 and the component of its velocity along OA is nonzero. When it is at point B (OB = 10 m), the component of velocity along OB is zero. When it is at point C (OC = 6 m), the component of velocity of the particle along OC is zero. If the velocity of projection of the stone is v₀ = 5√n ms⁻¹, then the value of n is.
A train is moving on a straight track with velocity v₀ = 13.5 ms⁻¹. To stop the train at a particular station, the driver applies brakes at t = 0, which causes a retardation proportional to the velocity of the train. The speed of the train reduces by 50% in the first 2 s. The velocity of the train (in ms⁻¹) at t₀ = 4 s (given e = 2.7) is
A fun drive in an amusement park runs between two spots that are 2.0 km apart. For safety reasons the acceleration of the drive is limited to ±4.0 m/s², and the jerk, or rate of change of acceleration, is limited to ±1.0 m/s³. The drive has a maximum speed of 144 km/h. If the shortest time taken by the drive to travel between the spots is n² s, the value of n is.
A runner travels around a rectangular track of length 70 m and width 30 m. After travelling around the rectangular track two times, the runner returns to the starting point. Determine the distance travelled by the runner.
A body starts from rest and moves with a uniform acceleration of 20 m/s² in the first 10 s. During the next 10 s it moves uniformly with the maximum velocity attained. The total displacement of the body is
A car, starting from rest, is accelerated at a constant rate α until it attains a speed v. It is then retarded at a constant rate β until it comes to rest. The average speed of the car during its entire journey is
A particle moving in a straight line covers half the distance with a speed of 3 ms⁻¹. The other half of the distance is covered in two equal time intervals with speeds of 4.5 ms⁻¹ and 7.5 ms⁻¹, respectively. Find the average speed (in m/s) of the particle during this motion.
A stone falls from rest. The distance covered by the stone in the last second of its motion equals the distance covered by it during the first three seconds of its motion. How long (in seconds) does the stone take to reach the ground? Take g = 10 ms⁻².
Two particles A and B are at a separation of 100 m. Particle A moves with a constant acceleration of 4 m/s² and an initial speed of 5 m/s, and B moves with a uniform speed of 12 m/s, towards each other. When and where do the particles meet?
A bullet fired into a fixed target loses half of its velocity after penetrating through a distance of 1 cm. How much further will it penetrate before coming to rest? (Assume that it faces constant retardation.)
A particle moves in a straight line with a constant acceleration. It changes its velocity from 10 m/s to 20 m/s while passing through a distance of 135 m in a time of t seconds. The value of the time t is
Two cars A and B are at rest at the origin O. If A starts with a uniform velocity of 20 m/s and B starts in the same direction with a constant acceleration of 2 m/s², then the cars will meet after time
Two trains travelling on the same track are approaching each other with an equal speed of 40 m/s. The drivers of the trains begin to decelerate simultaneously when they are just 2 km apart. Assuming the deceleration to be uniform and equal, the value of the deceleration to barely avoid collision should be ..... in m/s².
A cyclist starts from rest and moves with a constant acceleration of 1 m/s². A boy who is 48 m behind the cyclist starts moving with a constant velocity of 10 m/s. After how much time does the boy meet the cyclist?
A body A starts from rest with an acceleration a₁. After 2 s, another body B starts from rest with an acceleration a₂. If they cover equal distances in the 5th second after the start of A, then the ratio a₁ : a₂ is equal to
A particle is moving with constant acceleration from A to B in a straight line AB. If U and V are the velocities of the particle at A and B, respectively, then its velocity at the midpoint C will be
A particle of mass m is dropped from a height h above the ground. At the same time, another particle of the same mass is thrown vertically upwards from the ground with a speed of √(2gh). If they collide head-on completely inelastically, the time taken for the combined mass to reach the ground, in units of √(h/g), is:
The distance x covered by a particle in one-dimensional motion varies with time t as x² = at² + 2bt + c. If the acceleration of the particle depends on x as x⁻ⁿ, where n is an integer, the value of n is
Trains A and B are running on parallel tracks in opposite directions with speeds of 36 km/h and 72 km/h, respectively. A person is walking in train A in the direction opposite to its motion with a speed of 1.8 km/h. The speed (in ms⁻¹) of this person as observed from train B will be close to: (take the distance between the tracks as negligible)
A tennis ball is released from a height h and, after freely falling on a wooden floor, it rebounds and reaches a height h/2. The velocity versus height graph of the ball during its motion may be represented by: (graphs are drawn schematically and not to scale)
A helicopter rises from rest on the ground vertically upwards with a constant acceleration g. A food packet is dropped from the helicopter when it is at a height h. The time taken by the packet to reach the ground is close to [g is the acceleration due to gravity]:
A balloon is moving up in the air vertically above a point A on the ground. When it is at a height h₁, a girl standing at a distance d (at point B from A) (see figure) sees it at an angle of 45° with respect to the vertical. When the balloon climbs up a further height h₂, it is seen at an angle of 60° with respect to the vertical if the girl moves further by a distance 2.464d (to point C). Then the height h₂ is (given tan 30° = 0.5774):
A particle moves from the point (2î + 4ĵ) m at t = 0 with an initial velocity (5î + 4ĵ) m/s. It is acted upon by a constant acceleration (4î + 4ĵ) m/s². What is the distance of the particle from the origin at time t = 2 s, in metres?
A police van moving on a highway with a speed of 30 km/h fires a bullet at a thief's car speeding away in the same direction with a speed of 192 km/h. If the speed of the bullet with respect to the police van is 150 m/s, with what relative speed does the bullet hit the thief's car?
A truck is moving at a speed of 72 km h⁻¹ on a straight road. The driver can produce a deceleration of 2 ms⁻² by applying brakes. The stopping distance of the truck is 13x m if the reaction time of the driver is 0.2 s. The value of x is
On a city road, the last traffic light glows green for 60 s and red for 120 s. The range of speeds of vehicles in a group is from 50/3 ms⁻¹ to 200/9 ms⁻¹. The speed of each vehicle is constant. It is found that at a distance x from the traffic light, the successive groups passing through the traffic light become indistinguishable. The value of x (in km) is
The length of path ACB is 1500 m and the length of path ADB is 2100 m. Two particles start from point A simultaneously around the track ACBDA, one travelling the track in the clockwise sense and the other in the anticlockwise sense, with their respective constant speeds. They meet for the first time at point B, 12 s after the start. If the minimum time (in s) after which they again meet at point B is t_min = (12)ˣ s, the value of x is
A thief is running away on a straight road in a jeep moving with a speed of 9 ms⁻¹. A policeman chases him on a motorcycle moving at a speed of 10 ms⁻¹. If the instantaneous separation of the jeep from the motorcycle is 100 m, how long will it take for the policeman to catch the thief?
Two trains, one of length 100 m and another of length 125 m, moving in mutually opposite directions along parallel lines, meet each other, each with a speed of 10 m/s. If their accelerations are 0.3 m/s² and 0.2 m/s², respectively, then the time they take to pass each other will be
A body is projected vertically up with a velocity v and after some time it returns to the point from which it was projected. The average velocity and average speed of the body for the total time of flight are
A particle moving in a straight line covers half the distance with a speed of 3 m/s. The other half of the distance is covered in two equal time intervals with speeds of 4.5 m/s and 7.5 m/s, respectively. The average speed of the particle during this motion is
A ball is dropped vertically from a height d above the ground. It hits the ground and bounces up vertically to a height d/2. Neglecting subsequent motion and air resistance, its velocity v versus the height h above the ground is correctly shown in
From the top of a tower of height 400 m, a ball is dropped by a man; simultaneously, from the base of the tower, another ball is thrown up with a velocity of 50 m/s. At what distance from the base of the tower will they meet?
Two trains, which are moving along different tracks in opposite directions, are put on the same track by mistake. Their drivers, on noticing the mistake, start slowing down the trains when the trains are 300 m apart. The graphs given below show their velocities as functions of time as the trains slow down. The separation between the trains when both have stopped is
Between two stations, a train accelerates from rest uniformly at first, then moves with constant velocity, and finally retards uniformly to come to rest. If the ratio of the times taken is 1:8:1 and the maximum speed attained is 60 km h⁻¹, then what is the average speed over the whole journey?
A particle starts from the origin with a velocity of 10 ms⁻¹ and moves with a constant acceleration till its velocity increases to 50 ms⁻¹. At that instant, the acceleration is suddenly reversed. What will be the velocity of the particle when it returns to the starting point?
The displacement x of a particle moving in one dimension under the action of a constant force is related to time t by the equation t = √x + 3, where x is in metres and t is in seconds. Find the displacement of the particle when its velocity is zero.
A body starts from rest and travels a distance S with uniform acceleration, then moves uniformly over a distance 2S, and finally comes to rest after moving a further 5S under uniform retardation. The ratio of the average velocity to the maximum velocity is
A police party is chasing a dacoit in a jeep which is moving at a constant speed v. The dacoit is on a motorcycle. When he is at a distance x from the jeep, he accelerates from rest at a constant rate. Which of the following relations is true if the police is able to catch the dacoit?
The average velocity of a body moving with uniform acceleration after travelling a distance of 3.06 m is 0.34 ms⁻¹. If the change in velocity of the body is 0.18 ms⁻¹ during this time, its uniform acceleration is
A stone is dropped from the top of a tower of height h. After 1 s, another stone is dropped from a balcony 20 m below the top. Both reach the bottom simultaneously. What is the value of h? Take g = 10 ms⁻².
A stone is dropped from a height from which it would reach the ground in 5 s. It is stopped after 3 s of its fall and then released again. The total time taken by the stone to reach the ground will be
A ball is thrown from the top of a tower in the vertically upward direction. The velocity at a point h metres below the point of projection is twice the velocity at a point h metres above the point of projection. Find the maximum height reached by the ball above the top of the tower.
A juggler keeps four balls moving in the air, throwing the balls at regular intervals. When one ball leaves his hand (speed = 20 ms⁻¹), the positions of the other balls (heights in metres) will be (take g = 10 ms⁻²)
A body is thrown vertically upwards from A, the top of a tower. It reaches the ground in time t₁. If it is thrown vertically downwards from A with the same speed, it reaches the ground in time t₂. If it is allowed to fall freely from A, then the time it takes to reach the ground is given by
A parachutist first drops freely from an aeroplane for 10 s and then his parachute opens out. He then descends with a net retardation of 2.5 ms⁻². If he bails out of the plane at a height of 2495 m and g = 10 ms⁻², his velocity on reaching the ground will be
Water drops fall from a tap onto the floor 5 m below at regular intervals of time, the first drop striking the floor when the fifth drop begins to fall. The height at which the third drop will be from the ground (at the instant the first drop strikes the ground) will be (g = 10 ms⁻²)
A thief is running away on a straight road in a jeep moving with a speed of 9 ms⁻¹. A policeman chases him on a motorcycle moving at a speed of 10 ms⁻¹. If the instantaneous separation of the jeep from the motorcycle is 100 m, how long will it take for the policeman to catch the thief?
A train is moving at a constant speed V when its driver observes another train in front of him on the same track and moving in the same direction with constant speed v. If the distance between the trains is x, then what should be the minimum retardation of the train so as to avoid collision?
A person A is sitting in one train while another person B is in the second train. The trains are moving with velocities 60 m/s and 40 m/s, respectively, in the same direction. Then the velocity of B relative to A will be
Imagine yourself standing in an elevator which is moving with an upward acceleration a = 2 m/s². A coin is dropped from rest from the roof of the elevator, relative to you. The roof-to-floor height of the elevator is 1.5 m. (Take g = 10 m/s².) Find the velocity of the coin relative to you when it strikes the base of the elevator.
It takes one minute for a passenger standing on an escalator to reach the top. If the escalator does not move, it takes him 3 minutes to walk up. How long will it take for the passenger to arrive at the top if he walks up the moving escalator?
A bird flies to and fro between two cars which move with velocities v₁ = 20 m/s and v₂ = 30 m/s. If the speed of the bird is v₃ = 10 m/s and the initial distance of separation between them is d = 2 km, find the total distance covered by the bird till the cars meet.
The drawing shows velocity (v) versus time (t) graphs for two cyclists moving along the same straight segment of a highway from the same point. The second cyclist starts moving at t = 3 min. At what time do the two cyclists meet?
A train normally travels at a uniform speed of 72 km/h on a long stretch of straight level track. On a particular day, the train was forced to make a 2.0-minute stop at a station along this track. If the train decelerates at a uniform rate of 1.0 m/s² and accelerates at a rate of 0.50 m/s², how much time is lost in stopping at the station?
Each of four particles moves along an x-axis. Their coordinates (in metres) as functions of time (in seconds) are given by: Particle 1: x(t) = 3.5 − 2.7t³; Particle 2: x(t) = 3.5 + 2.7t³; Particle 3: x(t) = 3.5 + 2.7t²; Particle 4: x(t) = 3.5 − 3.4t − 2.7t². Which of these particles is speeding up for t > 0?
When two bodies move uniformly towards each other, the distance between them diminishes by 16 m every 10 s. If the bodies move with velocities of the same magnitudes as before but in the same direction, the distance between them will decrease by 3 m every 5 s. The velocity of each body is
Two objects moving along the same straight line leave point A with accelerations a and 2a and velocities 2u and u, respectively, at time t = 0. The distance moved by the objects with respect to point A when one object overtakes the other is
Two particles P and Q start from rest and move for equal times on a straight line. Particle P has an acceleration of X m/s² for the first half of the total time and 2X m/s² for the second half. Particle Q has an acceleration of 2X m/s² for the first half of the total time and X m/s² for the second half. Which particle has covered the larger distance?
A particle moving along a straight line with a constant acceleration of 4 m/s² passes through a point A on the line with a velocity of +8 m/s at some moment. Find the distance travelled by the particle in 5 seconds after that moment.
A stone is dropped from the top of a tower. When it has fallen 5 m from the top, another stone is dropped from a point 25 m below the top. If both stones reach the ground at the same moment, then the height of the tower is (take g = 10 m/s²)
Two bikes A and B start from the same point. A moves with a uniform speed of 40 m/s and B starts from rest with a uniform acceleration of 2 m/s². If B starts at t = 0 and A starts from the same point at t = 10 s, then the time during the journey in which A was ahead of B is
On a city road, the last traffic light glows green for 60 s and red for 120 s. The range of speeds of vehicles in a group is from 50/3 ms⁻¹ to 200/9 ms⁻¹. The speed of each vehicle is constant. It is found that at a distance x from the traffic light, the successive groups passing through the traffic light become indistinguishable. The value of x (in km) is.
The length of path ACB is 1500 m and the length of path ADB is 2100 m. Two particles start from point A simultaneously around the track ACBDA, one travelling the track in the clockwise sense and the other in the anticlockwise sense, with their respective constant speeds. They meet for the first time at point B, 12 s after the start. If the minimum time (in s) after which they again meet at point B is t_min = (12)ˣ s, the value of x is.
Balls A and B are released from rest from the roof of a building at t = 0 and t = 2 s, respectively. Ball A strikes the ground and comes back with the same speed. After some time, balls A and B meet each other at a height of 55 m from the ground. If the height of the building is 60n metres, then the value of n is.
A ball is thrown vertically upward from the roof of a building with a certain velocity. It reaches the ground in 9 s. When it is thrown downward from the roof with the same initial speed, it takes 4 s to reach the ground. How much time (in seconds) will it take to reach the ground if it is just released from rest from the roof?
A ball is released from rest from the top of a tower. The retardation due to air resistance is bv, where b = 10 per second and the velocity v is in ms⁻¹. The velocity of the ball at t = 1/10 s is n/27 ms⁻¹. The value of n (given e = 2.7) is.
The maximum acceleration or deceleration that a train may have is a = 5 ms⁻². The minimum time in which the train may travel between two stations separated by a distance d = 500 m is t₀ = 5√n s. The value of n is.
Two motorboats, which can move with velocities of 4.0 m/s and 6.0 m/s relative to the water, are going upstream. When the faster one overtakes the slower one, a buoy is dropped from the slower one. After a lapse of some time, both boats turn back simultaneously and move at the same speeds relative to the water as before. Their engines are switched off when they reach the buoy again. If the maximum separation between the boats is 200 m after the buoy is dropped, the water flow velocity is 1.5 m/s, and the distance between the two places where the boats meet the buoy is 100 × n metres, the value of n is.
An object falls from a bridge that is 45 m above the water. It falls directly into a small row-boat, moving with constant velocity, that was 12 m from the point of impact when the object was released. What was the speed of the boat?
A body starts from rest and travels with a uniform acceleration of 5 m/s², and then decelerates at a uniform rate of 3 m/s² to come to rest again. The total time of travel is 10 s. The maximum velocity attained by the body is
A helicopter is flying horizontally at 8 m/s at an altitude 180 m when a package of emergency medical supplies is ejected horizontally backward with a speed of 12 m/s relative to the helicopter. Ignoring air resistance, what is the horizontal distance between the package and the helicopter when the package hits the ground?
A very broad elevator platform is going up vertically with a constant acceleration of 1 ms⁻². At the instant when the velocity of the lift is 2 m/s, a stone is projected from the platform with a speed of 20 m/s relative to the floor at an elevation of 30°. The time taken by the stone to return to the floor will be
A car is moving at a certain speed. The minimum distance over which it can be stopped is x. If the speed of the car is doubled, what will be the minimum distance over which the car can be stopped for the same retardation?
A parachutist drops freely from an airplane for 10 s before the parachute opens. He then descends with a uniform retardation of 2.5 ms⁻². If he bails out of the plane at a height of 2495 m and g is 10 ms⁻², his velocity on reaching the ground will be
A particle moving in a straight line covers half the distance with a speed of 3 m/s. The other half of the distance is covered in two equal time intervals with speeds of 4.5 m/s and 7.5 m/s respectively. The average speed (in m/s) of the particle during this motion is
A body, moving in a straight line with an initial velocity of 5 ms⁻¹ and a constant acceleration, covers a distance of 30 m in the 3rd second. How much distance (in m) will it cover in the next 2 seconds?
The velocity of a particle moving in a straight line varies with its displacement as v = √(4 + 4s) m/s. The displacement of the particle at time t = 0 is s = 0. Find the displacement of the particle at time t = 2 s.
From the ground, a balloon starts ascending at a constant speed of 25 m/s. After 5 s, a bullet is shot vertically upward from the ground. Find the minimum speed of the bullet at which it is able to hit the balloon.
A police jeep is chasing a culprit going on a motorbike. The motorbike crosses a turning at a speed of 72 km h⁻¹. The jeep follows it at a speed of 90 km h⁻¹, crossing the turning 10 s later than the bike. The distance (in km) from the turning point at which the police catch the culprit is.
A person walks up a stalled 15 m long escalator in 90 s. When standing on the same escalator, now moving, the person is carried up in 60 s. How much time would it take that person to walk up the moving escalator? Does the answer depend on the length of the escalator?
The retardation experienced by a moving motor boat, after its engine is cut off, is given by dv/dt = −kv³, where k is a constant. If v₀ is the magnitude of the velocity at cut-off, the magnitude of the velocity at time t after the cut-off is
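That last one has a tidy closed form by separation of variables; for the record (a standard computation, added here as a check rather than part of the original list):

\frac{dv}{dt} = -kv^3 \;\Rightarrow\; \int_{v_0}^{v} \frac{dv}{v^3} = -\int_0^t k\,dt \;\Rightarrow\; \frac{1}{2v^2} - \frac{1}{2v_0^2} = kt \;\Rightarrow\; v(t) = \frac{v_0}{\sqrt{1 + 2ktv_0^2}}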
CHAPTER 6: Statistical Quality Control

Before studying this chapter you should know or, if necessary, review:
1. Quality as a competitive priority, Chapter 2, page 00.
2. Total quality management (TQM) concepts, Chapter 5, pages 00-00.

LEARNING OBJECTIVES
After studying this chapter you should be able to:
1. Describe categories of statistical quality control (SQC).
2. Explain the use of descriptive statistics in measuring quality characteristics.
3. Identify and describe causes of variation.
4. Describe the use of control charts.
5. Identify the differences between x-bar, R-, p-, and c-charts.
6. Explain the meaning of process capability and the process capability index.
7. Explain the term Six Sigma.
8. Explain the process of acceptance sampling and describe the use of operating characteristic (OC) curves.
9. Describe the challenges inherent in measuring quality in service organizations.

CHAPTER OUTLINE
What Is Statistical Quality Control? 172
Links to Practice: Intel Corporation 173
Sources of Variation: Common and Assignable Causes 174
Descriptive Statistics 174
Statistical Process Control Methods 176
Control Charts for Variables 178
Control Charts for Attributes 184
C-Charts 188
Process Capability 190
Links to Practice: Motorola, Inc. 196
Acceptance Sampling 196
Implications for Managers 203
Statistical Quality Control in Services 204
Links to Practice: The Ritz-Carlton Hotel Company, L.L.C.; Nordstrom, Inc. 205
Links to Practice: Marriott International, Inc. 205
OM Across the Organization 206
Inside OM 206
Case: Scharadin Hotels 216
Case: Delta Plastics, Inc. (B) 217
We have all had the experience of purchasing a product only to discover that it is defective in some way or does not function the way it was designed to. This could be a new backpack with a broken zipper or an "out of the box" malfunctioning computer printer. Many of us have struggled to assemble a product the manufacturer has indicated would need only "minor" assembly, only to find that a piece of the product is missing or defective. As consumers, we expect the products we purchase to function as intended. However, producers of products know that it is not always possible to inspect every product and every aspect of the production process at all times. The challenge is to design ways to maximize the ability to monitor the quality of products being produced and eliminate defects.

One way to ensure a quality product is to build quality into the process. Consider Steinway & Sons, the premier maker of pianos used in concert halls all over the world. Steinway has been making pianos since the 1880s. Since that time the company's manufacturing process has not changed significantly. It takes the company nine months to a year to produce a piano by fashioning some 12,000 hand-crafted parts, carefully measuring and monitoring every part of the process. While many of Steinway's competitors have moved to mass production, where pianos can be assembled in 20 days, Steinway has maintained a strategy of quality defined by skill and craftsmanship. Steinway's production process is focused on meticulous process precision and extremely high product consistency. This has contributed to making its name synonymous with top quality.

WHAT IS STATISTICAL QUALITY CONTROL?

In Chapter 5 we learned that total quality management (TQM) addresses organizational quality from managerial and philosophical viewpoints. TQM focuses on customer-driven quality standards, managerial leadership, continuous improvement, quality built into product and process design, quality identified problems at the source, and quality made everyone's responsibility. However, talking about solving quality problems is not enough. We need specific tools that can help us make the right quality decisions. These tools come from the area of statistics and are used to help identify quality problems in the production process as well as in the product itself. Statistical quality control is the subject of this chapter.

᭤ Statistical quality control (SQC): the general category of statistical tools used to evaluate organizational quality.

Statistical quality control (SQC) is the term used to describe the set of statistical tools used by quality professionals. Statistical quality control can be divided into three broad categories:

1. Descriptive statistics are used to describe quality characteristics and relationships. Included are statistics such as the mean, standard deviation, the range, and a measure of the distribution of data.

᭤ Descriptive statistics: statistics used to describe quality characteristics and relationships.
2. Statistical process control (SPC) involves inspecting a random sample of the output from a process and deciding whether the process is producing products with characteristics that fall within a predetermined range. SPC answers the question of whether the process is functioning properly or not.

3. Acceptance sampling is the process of randomly inspecting a sample of goods and deciding whether to accept the entire lot based on the results. Acceptance sampling determines whether a batch of goods should be accepted or rejected.

᭤ Statistical process control (SPC): a statistical tool that involves inspecting a random sample of the output from a process and deciding whether the process is producing products with characteristics that fall within a predetermined range.
᭤ Acceptance sampling: the process of randomly inspecting a sample of goods and deciding whether to accept the entire lot based on the results.

The tools in each of these categories provide different types of information for use in analyzing quality. Descriptive statistics are used to describe certain quality characteristics, such as the central tendency and variability of observed data. Although descriptions of certain characteristics are helpful, they are not enough to help us evaluate whether there is a problem with quality. Acceptance sampling can help us do this. Acceptance sampling helps us decide whether desirable quality has been achieved for a batch of products, and whether to accept or reject the items produced. Although this information is helpful in making the quality acceptance decision after the product has been produced, it does not help us identify and catch a quality problem during the production process. For this we need tools in the statistical process control (SPC) category.

All three of these statistical quality control categories are helpful in measuring and evaluating the quality of products or services. However, statistical process control (SPC) tools are used most frequently because they identify quality problems during the production process. For this reason, we will devote most of the chapter to this category of tools. The quality control tools we will be learning about do not only measure the value of a quality characteristic. They also help us identify a change or variation in some quality characteristic of the product or process. We will first see what types of variation we can observe when measuring quality. Then we will be able to identify specific tools used for measuring this variation.

LINKS TO PRACTICE: Intel Corporation (www.intel.com)

Variation in the production process leads to quality defects and lack of product consistency. The Intel Corporation, the world's largest and most profitable manufacturer of microprocessors, understands this. Therefore, Intel has implemented a program it calls "copy-exactly" at all its manufacturing facilities. The idea is that regardless of whether the chips are made in Arizona, New Mexico, Ireland, or any of its other plants, they are made in exactly the same way. This means using the same equipment, the same exact materials, and workers performing the same tasks in the exact same order. The level of detail to which the "copy-exactly" concept goes is meticulous. For example, when a chipmaking machine was found to be a few feet longer at one facility than another, Intel made them match. When water quality was found to be different at one facility, Intel instituted a purification system to eliminate any differences.
Even when a worker was found polishing equipment in one direction, he was asked to do it in the approved circular pattern. Why such attention to exactness of detail? The reason is to minimize all variation. Now let's look at the different types of variation that exist.
SOURCES OF VARIATION: COMMON AND ASSIGNABLE CAUSES

If you look at bottles of a soft drink in a grocery store, you will notice that no two bottles are filled to exactly the same level. Some are filled slightly higher and some slightly lower. Similarly, if you look at blueberry muffins in a bakery, you will notice that some are slightly larger than others and some have more blueberries than others. These types of differences are completely normal. No two products are exactly alike because of slight differences in materials, workers, machines, tools, and other factors. These are called common, or random, causes of variation. Common causes of variation are based on random causes that we cannot identify. These types of variation are unavoidable and are due to slight differences in processing.

᭤ Common causes of variation: random causes that cannot be identified.

An important task in quality control is to find out the range of natural random variation in a process. For example, if the average bottle of a soft drink called Cocoa Fizz contains 16 ounces of liquid, we may determine that the amount of natural variation is between 15.8 and 16.2 ounces. If this were the case, we would monitor the production process to make sure that the amount stays within this range. If production goes out of this range — bottles are found to contain on average 15.6 ounces — this would lead us to believe that there is a problem with the process because the variation is greater than the natural random variation.

The second type of variation that can be observed involves variations where the causes can be precisely identified and eliminated. These are called assignable causes of variation. Examples of this type of variation are poor quality in raw materials, an employee who needs more training, or a machine in need of repair. In each of these examples the problem can be identified and corrected. Also, if the problem is allowed to persist, it will continue to create a problem in the quality of the product. In the example of the soft drink bottling operation, bottles filled with 15.6 ounces of liquid would signal a problem. The machine may need to be readjusted. This would be an assignable cause of variation. We can assign the variation to a particular cause (machine needs to be readjusted) and we can correct the problem (readjust the machine).

᭤ Assignable causes of variation: causes that can be identified and eliminated.

DESCRIPTIVE STATISTICS

Descriptive statistics can be helpful in describing certain characteristics of a product and a process. The most important descriptive statistics are measures of central tendency such as the mean, measures of variability such as the standard deviation and range, and measures of the distribution of data. We first review these descriptive statistics and then see how we can measure their changes.

The Mean

In the soft drink bottling example, we stated that the average bottle is filled with 16 ounces of liquid. The arithmetic average, or the mean, is a statistic that measures the central tendency of a set of data. Knowing the central point of a set of data is highly important. Just think how important that number is when you receive test scores! To compute the mean we simply sum all the observations and divide by the total number of observations. The equation for computing the mean is

\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}

᭤ Mean (average): a statistic that measures the central tendency of a set of data.
where
\bar{x} = the mean
x_i = observation i, i = 1, ..., n
n = number of observations

The Range and Standard Deviation

In the bottling example we also stated that the amount of natural variation in the bottling process is between 15.8 and 16.2 ounces. This information provides us with the amount of variability of the data. It tells us how spread out the data is around the mean. There are two measures that can be used to determine the amount of variation in the data. The first measure is the range, which is the difference between the largest and smallest observations. In our example, the range for natural variation is 0.4 ounces.

᭤ Range: the difference between the largest and smallest observations in a set of data.

Another measure of variation is the standard deviation. The equation for computing the standard deviation is

\sigma = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}}

where
\sigma = standard deviation of a sample
\bar{x} = the mean
x_i = observation i, i = 1, ..., n
n = the number of observations in the sample

᭤ Standard deviation: a statistic that measures the amount of data dispersion around the mean.

Small values of the range and standard deviation mean that the observations are closely clustered around the mean. Large values of the range and standard deviation mean that the observations are spread out around the mean. Figure 6-1 illustrates the differences between a small and a large standard deviation for our bottling operation. You can see that the figure shows two distributions, both with a mean of 16 ounces. However, in the first distribution the standard deviation is large and the data are spread out far around the mean. In the second distribution the standard deviation is small and the data are clustered close to the mean.

[Figure 6-1: Normal distributions with varying standard deviations; two curves centered on a mean of 16.0 ounces, one with a small and one with a large standard deviation.]
[Figure 6-2: Differences between symmetric and skewed distributions.]
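As a quick check on these formulas, a few lines of Python (the five bottle volumes are invented for illustration; statistics.stdev uses the same n − 1 denominator as the equation above):

    import statistics

    # Hypothetical volumes (in ounces) from five sampled bottles of Cocoa Fizz.
    volumes = [15.9, 16.1, 16.0, 15.8, 16.2]

    mean = statistics.mean(volumes)       # central tendency: sum / n
    rng = max(volumes) - min(volumes)     # range: largest minus smallest observation
    stdev = statistics.stdev(volumes)     # sample standard deviation (divides by n - 1)

    print(f"mean = {mean:.2f} oz, range = {rng:.2f} oz, std dev = {stdev:.3f} oz")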
Distribution of Data

A third descriptive statistic used to measure quality characteristics is the shape of the distribution of the observed data. When a distribution is symmetric, there are the same number of observations below and above the mean. This is what we commonly find when only normal variation is present in the data. When a disproportionate number of observations are either above or below the mean, we say that the data have a skewed distribution. Figure 6-2 shows symmetric and skewed distributions for the bottling operation.

STATISTICAL PROCESS CONTROL METHODS

Statistical process control methods extend the use of descriptive statistics to monitor the quality of the product and process. As we have learned so far, there are common and assignable causes of variation in the production of every product. Using statistical process control we want to determine the amount of variation that is common or normal. Then we monitor the production process to make sure production stays within this normal range; that is, we want to make sure the process is in a state of control. The most commonly used tool for monitoring the production process is a control chart. Different types of control charts are used to monitor different aspects of the production process. In this section we will learn how to develop and use control charts.

Developing Control Charts

A control chart (also called a process chart or quality control chart) is a graph that shows whether a sample of data falls within the common or normal range of variation. A control chart has upper and lower control limits that separate common from assignable causes of variation. The common range of variation is defined by the control chart limits. We say that a process is out of control when a plot of data reveals that one or more samples fall outside the control limits. (Out of control: the situation in which a plot of data falls outside preset control limits.)

Figure 6-3 shows a control chart for the Cocoa Fizz bottling operation. The x axis represents samples (#1, #2, #3, etc.) taken from the process over time. The y axis represents the quality characteristic that is being monitored (ounces of liquid). The center line (CL) of the control chart is the mean, or average, of the quality characteristic being measured; in Figure 6-3 the mean is 16 ounces. The upper control limit (UCL) is the maximum acceptable variation from the mean for a process that is in a state of control. Similarly, the lower control limit (LCL) is the minimum acceptable variation from the mean for a process that is in a state of control.

[Figure 6-3: Quality control chart for Cocoa Fizz with UCL = 16.2, CL = 16.0, and LCL = 15.8 ounces; points between the limits reflect variation due to normal causes, while observations outside the limits signal variation due to assignable causes.]
In our example, the upper and lower control limits are 16.2 and 15.8 ounces, respectively. You can see that if a sample of observations falls outside the control limits we need to look for assignable causes.

The upper and lower control limits on a control chart are usually set at ±3 standard deviations from the mean. If we assume that the data exhibit a normal distribution, these control limits will capture 99.74 percent of the normal variation. Control limits can also be set at ±2 standard deviations from the mean; in that case they would capture 95.44 percent of the values. Figure 6-4 shows the percentage of values that fall within a particular range of standard deviations.

Looking at Figure 6-4, we can conclude that observations that fall outside the set range represent assignable causes of variation. However, there is a small probability that a value that falls outside the limits is still due to normal variation. This is called Type I error, the error being the chance of concluding that there are assignable causes of variation when only normal variation exists. Another name for this is alpha risk (α), where alpha refers to the sum of the probabilities in both tails of the distribution that fall outside the confidence limits. The chance of this happening is given by the probability represented by the shaded areas of Figure 6-5. For limits of ±3 standard deviations from the mean, the probability of a Type I error is .26% (100% − 99.74%), whereas for limits of ±2 standard deviations it is 4.56% (100% − 95.44%).

Types of Control Charts

Control charts are one of the most commonly used tools in statistical process control. They can be used to measure any characteristic of a product, such as the weight of a cereal box, the number of chocolates in a box, or the volume of bottled water. The different characteristics that can be measured by control charts can be divided into two groups: variables and attributes. A control chart for variables is used to monitor characteristics that can be measured and have a continuum of values, such as height, weight, or volume. (Variable: a product characteristic that can be measured and has a continuum of values.) A soft drink bottling operation is an example of a variable measure, since the amount of liquid in the bottles is measured and can take on a number of different values. Other examples are the weight of a bag of sugar, the temperature of a baking oven, or the diameter of plastic tubing. (Attribute: a product characteristic that has a discrete value and can be counted.)

[Figure 6-4: Percentage of values captured by different ranges of standard deviation: ±2σ captures 95.44% and ±3σ captures 99.74%. Figure 6-5: Chance of Type I error for ±3σ is .26%.]
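The capture percentages quoted above follow from the normal distribution, so they can be checked directly. This sketch (an illustration, not from the text) computes the probability captured within ±z standard deviations, and the corresponding Type I error, using the error function from Python's standard library.

```python
import math

def normal_capture(z: float) -> float:
    """Probability that a standard normal value falls within +/- z standard deviations."""
    # P(-z < Z < z) = erf(z / sqrt(2)) for the standard normal distribution
    return math.erf(z / math.sqrt(2))

for z in (2, 3):
    captured = normal_capture(z)
    alpha = 1 - captured  # Type I error: normal variation flagged as out of control
    print(f"z = {z}: captures {captured:.2%}, Type I error = {alpha:.2%}")
```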
A control chart for attributes, on the other hand, is used to monitor characteristics that have discrete values and can be counted. Often they can be evaluated with a simple yes-or-no decision. Examples include color, taste, or smell. The monitoring of attributes usually takes less time than that of variables because a variable needs to be measured (e.g., the bottle of soft drink contains 15.9 ounces of liquid), whereas an attribute requires only a single decision, such as yes or no, good or bad, acceptable or unacceptable (e.g., the apple is good or rotten, the meat is good or stale, the shoes have a defect or do not, the lightbulb works or it does not), or a count of defects (e.g., the number of broken cookies in the box, the number of dents in the car, the number of barnacles on the bottom of a boat).

Statistical process control is used to monitor many different types of variables and attributes. In the next two sections we look at how to develop control charts for variables and control charts for attributes.

CONTROL CHARTS FOR VARIABLES

Control charts for variables monitor characteristics that can be measured and have a continuous scale, such as height, weight, volume, or width. When an item is inspected, the variable being monitored is measured and recorded. For example, if we were producing candles, height might be an important variable, and we could take samples of candles and measure their heights. Two of the most commonly used control charts for variables monitor the central tendency of the data (the mean) and the variability of the data (either the standard deviation or the range). Note that each chart monitors a different type of information. When observed values go outside the control limits, the process is assumed not to be in control; production is stopped, and employees attempt to identify the cause of the problem and correct it. Next we look at how these charts are developed.

Mean (x-Bar) Charts

A mean control chart is often referred to as an x-bar chart. It is used to monitor changes in the mean of a process. (x-bar chart: a control chart used to monitor changes in the mean value of a process.) To construct a mean chart we first need to construct the center line of the chart. To do this we take multiple samples and compute their means. Usually these samples are small, with about four or five observations. Each sample has its own mean, $\bar{x}$. The center line of the chart is then computed as the mean of all $k$ sample means, where $k$ is the number of samples:

$$\bar{\bar{x}} = \frac{\bar{x}_1 + \bar{x}_2 + \cdots + \bar{x}_k}{k}$$

To construct the upper and lower control limits of the chart, we use the following formulas:

$$\text{UCL} = \bar{\bar{x}} + z\sigma_{\bar{x}} \qquad \text{LCL} = \bar{\bar{x}} - z\sigma_{\bar{x}}$$

where
$\bar{\bar{x}}$ = the average of the sample means,
$z$ = the standard normal variable (2 for 95.44% confidence, 3 for 99.74% confidence),
$\sigma_{\bar{x}}$ = the standard deviation of the distribution of sample means, computed as $\sigma/\sqrt{n}$,
$\sigma$ = the population (process) standard deviation, and
$n$ = the sample size (number of observations per sample).

Example 6.1 shows the construction of a mean (x-bar) chart.
EXAMPLE 6.1: Constructing a Mean (x-Bar) Chart

A quality control inspector at the Cocoa Fizz soft drink company has taken twenty-five samples with four observations each of the volume of bottles filled. The data and the computed means are shown in the table. If the standard deviation of the bottling operation is 0.14 ounces, use this information to develop control limits of three standard deviations for the bottling operation.

Sample | Obs. 1 | Obs. 2 | Obs. 3 | Obs. 4 | Average (x-bar) | Range (R)
1      | 15.85  | 16.02  | 15.83  | 15.93  | 15.91 | 0.19
2      | 16.12  | 16.00  | 15.85  | 16.01  | 15.99 | 0.27
3      | 16.00  | 15.91  | 15.94  | 15.83  | 15.92 | 0.17
4      | 16.20  | 15.85  | 15.74  | 15.93  | 15.93 | 0.46
5      | 15.74  | 15.86  | 16.21  | 16.10  | 15.98 | 0.47
6      | 15.94  | 16.01  | 16.14  | 16.03  | 16.03 | 0.20
7      | 15.75  | 16.21  | 16.01  | 15.86  | 15.96 | 0.46
8      | 15.82  | 15.94  | 16.02  | 15.94  | 15.93 | 0.20
9      | 16.04  | 15.98  | 15.83  | 15.98  | 15.96 | 0.21
10     | 15.64  | 15.86  | 15.94  | 15.89  | 15.83 | 0.30
11     | 16.11  | 16.00  | 16.01  | 15.82  | 15.99 | 0.29
12     | 15.72  | 15.85  | 16.12  | 16.15  | 15.96 | 0.43
13     | 15.85  | 15.76  | 15.74  | 15.98  | 15.83 | 0.24
14     | 15.73  | 15.84  | 15.96  | 16.10  | 15.91 | 0.37
15     | 16.20  | 16.01  | 16.10  | 15.89  | 16.05 | 0.31
16     | 16.12  | 16.08  | 15.83  | 15.94  | 15.99 | 0.29
17     | 16.01  | 15.93  | 15.81  | 15.68  | 15.86 | 0.33
18     | 15.78  | 16.04  | 16.11  | 16.12  | 16.01 | 0.34
19     | 15.84  | 15.92  | 16.05  | 16.12  | 15.98 | 0.28
20     | 15.92  | 16.09  | 16.12  | 15.93  | 16.02 | 0.20
21     | 16.11  | 16.02  | 16.00  | 15.88  | 16.00 | 0.23
22     | 15.98  | 15.82  | 15.89  | 15.89  | 15.90 | 0.16
23     | 16.05  | 15.73  | 15.73  | 15.93  | 15.86 | 0.32
24     | 16.01  | 16.01  | 15.89  | 15.86  | 15.94 | 0.15
25     | 16.08  | 15.78  | 15.92  | 15.98  | 15.94 | 0.30
Total  |        |        |        |        | 398.75 | 7.17

Solution: The center line of the control chart is the average of the sample means:

$$\bar{\bar{x}} = \frac{398.75}{25} = 15.95$$

The control limits are

$$\text{UCL} = \bar{\bar{x}} + z\sigma_{\bar{x}} = 15.95 + 3\left(\frac{.14}{\sqrt{4}}\right) = 16.16$$
$$\text{LCL} = \bar{\bar{x}} - z\sigma_{\bar{x}} = 15.95 - 3\left(\frac{.14}{\sqrt{4}}\right) = 15.74$$
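The same limits can be computed programmatically. Below is a minimal Python sketch of the calculation in Example 6.1, using the totals from the table above.

```python
import math

sigma = 0.14                 # process standard deviation (ounces)
n = 4                        # observations per sample
z = 3                        # three-sigma limits
x_double_bar = 398.75 / 25   # average of the 25 sample means

sigma_x_bar = sigma / math.sqrt(n)   # standard error of the mean
ucl = x_double_bar + z * sigma_x_bar
lcl = x_double_bar - z * sigma_x_bar

print(f"CL = {x_double_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
# CL = 15.95, UCL = 16.16, LCL = 15.74
```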
[Spreadsheet computations for the x-bar chart: Overall Mean = 15.95; Sigma for Process = 0.14 ounces; Standard Error of the Mean = 0.07 (=D41/SQRT(D34)); Z-value for control charts = 3; CL: Center Line = 15.95; LCL: Lower Control Limit = 15.74 (=D40-D43*D42); UCL: Upper Control Limit = 16.16 (=D40+D43*D42).]

Another way to construct the control limits is to use the sample range as an estimate of the variability of the process. Remember that the range is simply the difference between the largest and smallest values in the sample. The spread of the range can tell us about the variability of the data. In this case control limits would be constructed as follows:

$$\text{UCL} = \bar{\bar{x}} + A_2\bar{R} \qquad \text{LCL} = \bar{\bar{x}} - A_2\bar{R}$$

where
$\bar{\bar{x}}$ = the average of the sample means,
$\bar{R}$ = the average range of the samples, and
$A_2$ = a factor obtained from Table 6-1.

Notice that $A_2$ is a factor that includes three standard deviations of ranges and depends on the sample size being considered.

EXAMPLE 6.2: Constructing a Mean (x-Bar) Chart from the Sample Range

A quality control inspector at Cocoa Fizz is using the data from Example 6.1 to develop control limits. If the average range ($\bar{R}$) for the twenty-five samples is .29 ounces (computed as 7.17/25) and the average mean ($\bar{\bar{x}}$) of the observations is 15.95 ounces, develop three-sigma control limits for the bottling operation.

Solution: $\bar{\bar{x}} = 15.95$ ounces and $\bar{R} = .29$. The value of $A_2$ is obtained from Table 6-1; for $n = 4$, $A_2 = .73$. This leads to the following limits:

CL = 15.95 ounces
UCL = $\bar{\bar{x}} + A_2\bar{R}$ = 15.95 + (.73)(.29) = 16.16
LCL = $\bar{\bar{x}} - A_2\bar{R}$ = 15.95 − (.73)(.29) = 15.74
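The same arithmetic in a short Python sketch (illustrative; the A2 value is the Table 6-1 entry for n = 4):

```python
x_double_bar = 15.95   # average of the sample means (ounces)
r_bar = 0.29           # average sample range (7.17 / 25, rounded)
a2 = 0.73              # Table 6-1 factor for samples of size n = 4

ucl = x_double_bar + a2 * r_bar
lcl = x_double_bar - a2 * r_bar
print(f"UCL = {ucl:.2f}, LCL = {lcl:.2f}")   # UCL = 16.16, LCL = 15.74
```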
TABLE 6-1: Factors for three-sigma control limits of x-bar and R-charts (factors adapted from the ASTM Manual on Quality Control of Materials)

Sample Size n | A2   | D3   | D4
2             | 1.88 | 0    | 3.27
3             | 1.02 | 0    | 2.57
4             | 0.73 | 0    | 2.28
5             | 0.58 | 0    | 2.11
6             | 0.48 | 0    | 2.00
7             | 0.42 | 0.08 | 1.92
8             | 0.37 | 0.14 | 1.86
9             | 0.34 | 0.18 | 1.82
10            | 0.31 | 0.22 | 1.78
11            | 0.29 | 0.26 | 1.74
12            | 0.27 | 0.28 | 1.72
13            | 0.25 | 0.31 | 1.69
14            | 0.24 | 0.33 | 1.67
15            | 0.22 | 0.35 | 1.65
16            | 0.21 | 0.36 | 1.64
17            | 0.20 | 0.38 | 1.62
18            | 0.19 | 0.39 | 1.61
19            | 0.19 | 0.40 | 1.60
20            | 0.18 | 0.41 | 1.59
21            | 0.17 | 0.43 | 1.58
22            | 0.17 | 0.43 | 1.57
23            | 0.16 | 0.44 | 1.56
24            | 0.16 | 0.45 | 1.55
25            | 0.15 | 0.46 | 1.54

Range (R) Charts

Range (R) charts are another type of control chart for variables. Whereas x-bar charts measure a shift in the central tendency of the process, range charts monitor the dispersion or variability of the process. (Range (R) chart: a control chart that monitors changes in the dispersion or variability of a process.) The method for developing and using R-charts is the same as that for x-bar charts. The center line of the control chart is the average range, and the upper and lower control limits are computed as follows:

CL = $\bar{R}$
UCL = $D_4\bar{R}$
LCL = $D_3\bar{R}$

where the values for $D_4$ and $D_3$ are obtained from Table 6-1.
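These limits are a simple table lookup plus two multiplications. A minimal Python sketch (illustrative), using the Example 6.1 average range and the Table 6-1 factors for n = 4:

```python
r_bar = 0.29        # average sample range from Example 6.1 (7.17 / 25, rounded)
d4, d3 = 2.28, 0.0  # Table 6-1 factors for samples of size n = 4

ucl = d4 * r_bar    # upper control limit for the sample range
lcl = d3 * r_bar    # lower control limit (zero for small sample sizes)
print(f"CL = {r_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
# CL = 0.29, UCL = 0.66, LCL = 0.00
```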
EXAMPLE 6.3: Constructing a Range (R) Chart

The quality control inspector at Cocoa Fizz would like to develop a range (R) chart in order to monitor volume dispersion in the bottling process. Use the data from Example 6.1 to develop control limits for the sample range.

Solution: From the data in Example 6.1 the average sample range is

$$\bar{R} = \frac{7.17}{25} = 0.29, \qquad n = 4$$

From Table 6-1, for $n = 4$: $D_4 = 2.28$ and $D_3 = 0$. Therefore

UCL = $D_4\bar{R}$ = 2.28(0.29) = 0.6612
LCL = $D_3\bar{R}$ = 0(0.29) = 0

[Control chart: the 25 sample ranges plotted against UCL = 0.66, CL = 0.29, and LCL = 0; all sample ranges fall within the limits.]

Using Mean and Range Charts Together

You can see that mean and range charts are used to monitor different variables. The mean or x-bar chart measures the central tendency of the process, whereas the range chart measures its dispersion or variance. Since both variables are important, it makes sense to monitor a process using both mean and range charts.
[Figure 6-6: Process shifts captured by x-bar charts and R-charts. Part (a): a shift in the mean is detected by the x-bar chart but not by the R-chart. Part (b): a shift in dispersion is detected by the R-chart but not by the x-bar chart.]

It is possible to have a shift in the mean of the product but not a change in the dispersion. For example, at the Cocoa Fizz bottling plant the machine setting can shift so that the average bottle filled contains not 16.0 ounces, but 15.9 ounces of liquid. The dispersion could be the same, and this shift would be detected by an x-bar chart but not by a range chart, as shown in part (a) of Figure 6-6. On the other hand, there could be a shift in the dispersion of the product without a change in the mean: Cocoa Fizz may still be producing bottles with an average fill of 16.0 ounces, but the dispersion of the product may have increased, as shown in part (b) of Figure 6-6. This condition would be detected by a range chart but not by an x-bar chart. Because a shift in either the mean or the range means that the process is out of control, it is important to use both charts to monitor the process.

CONTROL CHARTS FOR ATTRIBUTES

Control charts for attributes are used to measure quality characteristics that are counted rather than measured. Attributes are discrete in nature and entail simple yes-or-no decisions. For example, this could be the number of nonfunctioning lightbulbs, the proportion of broken eggs in a carton, the number of rotten apples, the number of scratches on a tile, or the number of complaints issued. Two of the most common types of control charts for attributes are p-charts and c-charts.
CONTROL CHARTS FOR ATTRIBUTES • 185of the most common types of control charts for attributes are p-charts andc-charts. P-charts are used to measure the proportion of items in a sample that aredefective. Examples are the proportion of broken cookies in a batch and the pro-portion of cars produced with a misaligned fender. P-charts are appropriate whenboth the number of defectives measured and the size of the total sample can becounted. A proportion can then be computed and used as the statistic of mea-surement. C-charts count the actual number of defects. For example, we can count the num-ber of complaints from customers in a month, the number of bacteria on a petri dish,or the number of barnacles on the bottom of a boat. However, we cannot compute theproportion of complaints from customers, the proportion of bacteria on a petri dish,or the proportion of barnacles on the bottom of a boat.Problem-Solving Tip: The primary difference between using a p-chart and a c-chart is as follows.A p-chart is used when both the total sample size and the number of defects can be computed.A c-chart is used when we can compute only the number of defects but cannot compute the propor-tion that is defective.P-ChartsP-charts are used to measure the proportion that is defective in a sample. The com- ᭤ P-chartputation of the center line as well as the upper and lower control limits is similar to A control chart that monitorsthe computation for the other kinds of control charts. The center line is computed as the proportion of defects in a sample.the average proportion defective in the population, p. This is obtained by taking anumber of samples of observations at random and computing the average value of pacross all samples. To construct the upper and lower control limits for a p-chart, we use the followingformulas: UCL ϭ p ϩ zp LCL ϭ p Ϫ zpwhere z ϭ standard normal variable p ϭ the sample proportion defective p ϭ the standard deviation of the average proportion defectiveAs with the other charts, z is selected to be either 2 or 3 standard deviations, depend-ing on the amount of data we wish to capture in our control limits. Usually, however,they are set at 3. The sample standard deviation is computed as follows: √ p(1 Ϫ p) p ϭ nwhere n is the sample size.
EXAMPLE 6.4: Constructing a p-Chart

A production manager at a tire manufacturing plant has inspected the number of defective tires in twenty random samples with twenty observations each. Following are the numbers of defective tires found in each sample:

Sample | Defective Tires | Observations Sampled | Fraction Defective
1      | 3  | 20  | .15
2      | 2  | 20  | .10
3      | 1  | 20  | .05
4      | 2  | 20  | .10
5      | 1  | 20  | .05
6      | 3  | 20  | .15
7      | 3  | 20  | .15
8      | 2  | 20  | .10
9      | 1  | 20  | .05
10     | 2  | 20  | .10
11     | 3  | 20  | .15
12     | 2  | 20  | .10
13     | 2  | 20  | .10
14     | 1  | 20  | .05
15     | 1  | 20  | .05
16     | 2  | 20  | .10
17     | 4  | 20  | .20
18     | 3  | 20  | .15
19     | 1  | 20  | .05
20     | 1  | 20  | .05
Total  | 40 | 400 |

Construct a three-sigma control chart (z = 3) with this information.

Solution: The center line of the chart is

$$\text{CL} = \bar{p} = \frac{\text{total number of defective tires}}{\text{total number of observations}} = \frac{40}{400} = .10$$

$$\sigma_p = \sqrt{\frac{\bar{p}(1-\bar{p})}{n}} = \sqrt{\frac{(.10)(.90)}{20}} = .067$$

$$\text{UCL} = \bar{p} + z\sigma_p = .10 + 3(.067) = .301$$
$$\text{LCL} = \bar{p} - z\sigma_p = .10 - 3(.067) = -.101 \rightarrow 0$$

In this example the lower control limit is negative, which sometimes occurs because the computation is an approximation of the binomial distribution. When this occurs, the LCL is rounded up to zero because we cannot have a negative control limit.
The resulting control chart plots the twenty sample fractions defective against LCL = 0, CL = .10, and UCL = .301; all points fall within the limits.

[Spreadsheet version: the fraction defective for each sample is computed as the number of defectives divided by the sample size (e.g., =B8/C$4); p-bar = 0.100 (=SUM(B8:B27)/(C4*C5)); Sigma_p = 0.067 (=SQRT((C29*(1-C29))/C4)); z = 3; CL: Center Line = 0.100; LCL: Lower Control Limit = 0.000 (=MAX(C$29-C$31*C$30,0)); UCL: Upper Control Limit = 0.301 (=C$29+C$31*C$30).]
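A minimal Python sketch of the same p-chart limits (the defect counts are those from the Example 6.4 table):

```python
import math

defectives = [3, 2, 1, 2, 1, 3, 3, 2, 1, 2, 3, 2, 2, 1, 1, 2, 4, 3, 1, 1]
n = 20        # observations per sample
z = 3         # three-sigma limits

p_bar = sum(defectives) / (n * len(defectives))   # 40 / 400 = 0.10
sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)      # about 0.067

ucl = p_bar + z * sigma_p
lcl = max(p_bar - z * sigma_p, 0.0)   # a proportion cannot be negative, so clamp at zero
print(f"CL = {p_bar:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
# CL = 0.100, UCL = 0.301, LCL = 0.000
```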
C-Charts

C-charts are used to monitor the number of defects per unit. (C-chart: a control chart used to monitor the number of defects per unit.) Examples are the number of returned meals in a restaurant, the number of trucks that exceed their weight limit in a month, the number of discolorations on a square foot of carpet, and the number of bacteria in a milliliter of water. Note that the types of units of measurement we are considering are a period of time, a surface area, or a volume of liquid. The average number of defects, $\bar{c}$, is the center line of the control chart. The upper and lower control limits are computed as follows:

$$\text{UCL} = \bar{c} + z\sqrt{\bar{c}} \qquad \text{LCL} = \bar{c} - z\sqrt{\bar{c}}$$

EXAMPLE 6.5: Computing a C-Chart

The number of weekly customer complaints is monitored at a large hotel using a c-chart. Complaints have been recorded over the past twenty weeks. Develop three-sigma control limits using the following data:

Week:              1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 | Total
No. of Complaints: 3 2 3 1 3 3 2 1 3 1  3  4  2  1  1  1  3  2  2  3  | 44

Solution: The average number of complaints per week is 44/20 = 2.2, so $\bar{c} = 2.2$.

$$\text{UCL} = \bar{c} + z\sqrt{\bar{c}} = 2.2 + 3\sqrt{2.2} = 6.65$$
$$\text{LCL} = \bar{c} - z\sqrt{\bar{c}} = 2.2 - 3\sqrt{2.2} = -2.25 \rightarrow 0$$

As in the previous example, the LCL is negative and is therefore rounded up to zero.

[Control chart: weekly complaint counts plotted against LCL = 0, CL = 2.2, and UCL = 6.65; all points fall within the limits.]
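The c-chart limits of Example 6.5, computed in the same style (a sketch, not from the text):

```python
import math

complaints = [3, 2, 3, 1, 3, 3, 2, 1, 3, 1, 3, 4, 2, 1, 1, 1, 3, 2, 2, 3]
z = 3

c_bar = sum(complaints) / len(complaints)      # 44 / 20 = 2.2
ucl = c_bar + z * math.sqrt(c_bar)
lcl = max(c_bar - z * math.sqrt(c_bar), 0.0)   # negative limit rounded up to zero
print(f"CL = {c_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
# CL = 2.20, UCL = 6.65, LCL = 0.00
```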
The same computations can be done in a spreadsheet:

[Spreadsheet version: the twenty weekly complaint counts are listed in one column; c-bar = 2.2 (=AVERAGE(B5:B24)); z = 3; Sigma_c = 1.4832 (the square root of c-bar); CL: Center Line = 2.20; LCL: Lower Control Limit = MAX(c-bar − z·Sigma_c, 0) = 0.00; UCL: Upper Control Limit = c-bar + z·Sigma_c = 6.65.]

Before You Go On

We have discussed several types of statistical quality control (SQC) techniques. One category of SQC techniques consists of descriptive statistics tools such as the mean, range, and standard deviation. These tools are used to describe quality characteristics and relationships. Another category of SQC techniques consists of statistical process control (SPC) methods that are used to monitor changes in the production process. To understand SPC methods you must understand the differences between common and assignable causes of variation.
Common causes of variation are based on random causes that cannot be identified. A certain amount of common or normal variation occurs in every process due to differences in materials, workers, machines, and other factors. Assignable causes of variation, on the other hand, are variations that can be identified and eliminated. An important part of statistical process control (SPC) is monitoring the production process to make sure that the only variations in the process are those due to common or normal causes. Under these conditions we say that a production process is in a state of control. You should also understand the different types of quality control charts that are used to monitor the production process: x-bar charts, R-charts, p-charts, and c-charts.

PROCESS CAPABILITY

So far we have discussed ways of monitoring the production process to ensure that it is in a state of control and that there are no assignable causes of variation. A critical aspect of statistical quality control is evaluating the ability of a production process to meet or exceed preset specifications. This is called process capability. (Process capability: the ability of a production process to meet or exceed preset specifications.) To understand exactly what this means, let's look more closely at the term specification. Product specifications, often called tolerances, are preset ranges of acceptable quality characteristics, such as product dimensions. For a product to be considered acceptable, its characteristics must fall within this preset range; otherwise, the product is not acceptable. Product specifications, or tolerance limits, are usually established by design engineers or product design specialists. For example, the specification for the width of a machine part may be 15 inches ±.3. This means that the width of the part should be 15 inches, though it is acceptable if it falls within the limits of 14.7 inches and 15.3 inches. Similarly, for Cocoa Fizz, the average bottle fill may be 16 ounces with tolerances of ±.2 ounces: although the bottles should be filled with 16 ounces of liquid, the amount can be as low as 15.8 or as high as 16.2 ounces.

Specifications for a product are preset on the basis of how the product is going to be used or what customer expectations are. As we have learned, any production process has a certain amount of natural variation associated with it. To be capable of producing an acceptable product, the process variation cannot exceed the preset specifications. Process capability thus involves evaluating process variability relative to preset product specifications in order to determine whether the process is capable of producing an acceptable product. In this section we will learn how to measure process capability.

Measuring Process Capability

Simply setting up control charts to monitor whether a process is in control does not guarantee process capability. To produce an acceptable product, the process must be capable and in control before production begins. Let's look at three examples of process variation relative to design specifications for the Cocoa Fizz soft drink company. Say that the specification for the acceptable volume of liquid is preset at 16 ounces ±.2 ounces, that is, 15.8 to 16.2 ounces. In part (a) of Figure 6-7 the process produces 99.74 percent (three sigma) of the product with volumes between 15.8 and 16.2 ounces.
You can see that the process variability closely matches the preset specifications; almost all the output falls within the preset specification range.
In part (b) of Figure 6-7, however, the process produces 99.74 percent (three sigma) of the product with volumes between 15.7 and 16.3 ounces. The process variability is outside the preset specifications, and a large percentage of the product will fall outside the specified limits. This means that the process is not capable of producing the product within the preset specifications.

Part (c) of Figure 6-7 shows that the production process produces 99.74 percent (three sigma) of the product with volumes between 15.9 and 16.1 ounces. In this case the process variability is within specifications and the process exceeds the minimum capability.

Process capability is measured by the process capability index, $C_p$, which is computed as the ratio of the specification width to the width of the process variability:

$$C_p = \frac{\text{specification width}}{\text{process width}} = \frac{\text{USL} - \text{LSL}}{6\sigma}$$

where the specification width is the difference between the upper specification limit (USL) and the lower specification limit (LSL) of the process. (Process capability index: an index used to measure process capability.)

[Figure 6-7: Relationship between process variability (±3σ) and specification width (LSL = 15.8, USL = 16.2): (a) process variability just meets the specification width; (b) process variability extends outside the specification width; (c) process variability falls well within the specification width.]
The process width is computed as 6 standard deviations ($6\sigma$) of the process being monitored. The reason we use $6\sigma$ is that most of the process measurements (99.74 percent) fall within ±3 standard deviations, a total of 6 standard deviations. There are three possible ranges of values for $C_p$ that help us interpret its value:

$C_p = 1$: The process variability just meets specifications, as in Figure 6-7(a). We would then say that the process is minimally capable.
$C_p < 1$: The process variability is outside the range of specification, as in Figure 6-7(b). The process is not capable of producing within specification and must be improved.
$C_p > 1$: The process variability is tighter than specifications and the process exceeds minimal capability, as in Figure 6-7(c).

A $C_p$ value of 1 means that 99.74 percent of the products produced will fall within the specification limits, and therefore that .26 percent (100% − 99.74%) of the products will not be acceptable. Although this percentage sounds very small, in terms of parts per million (ppm) it can still result in a lot of defects: .26 percent corresponds to 2600 defective parts per million (0.0026 × 1,000,000). That number can seem very high if we think of it as 2600 wrong prescriptions out of a million, 2600 incorrect medical procedures out of a million, or 2600 malfunctioning aircraft out of a million. The way to reduce the ppm defective is to increase process capability.

EXAMPLE 6.6: Computing the Cp Value at Cocoa Fizz

Three bottling machines at Cocoa Fizz are being evaluated for their capability:

Bottling Machine | Standard Deviation
A | .05
B | .1
C | .2

If specifications are set between 15.8 and 16.2 ounces, determine which of the machines are capable of producing within specifications.

Solution: To determine the capability of each machine we divide the specification width (USL − LSL = 16.2 − 15.8 = .4) by $6\sigma$ for each machine:

Machine | σ   | USL − LSL | 6σ  | Cp = (USL − LSL)/6σ
A       | .05 | .4        | .3  | 1.33
B       | .1  | .4        | .6  | .67
C       | .2  | .4        | 1.2 | .33

Looking at the $C_p$ values, only machine A is capable of filling bottles within specifications, because it is the only machine with a $C_p$ value at or above 1.
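Example 6.6's comparison is a one-line calculation per machine. Here is an illustrative Python sketch:

```python
usl, lsl = 16.2, 15.8                         # specification limits (ounces)
machines = {"A": 0.05, "B": 0.1, "C": 0.2}    # machine: process standard deviation

for name, sigma in machines.items():
    cp = (usl - lsl) / (6 * sigma)            # specification width over process width
    verdict = "capable" if cp >= 1 else "not capable"
    print(f"Machine {name}: Cp = {cp:.2f} ({verdict})")
# A: 1.33 (capable), B: 0.67 (not capable), C: 0.33 (not capable)
```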
$C_p$ is valuable in measuring process capability. However, it has one shortcoming: it assumes that process variability is centered on the specification range. Unfortunately, this is not always the case. Figure 6-8 shows data from the Cocoa Fizz example. In the figure the specification limits are set between 15.8 and 16.2 ounces, with a midpoint of 16.0 ounces. However, the process variation is not centered; it has a mean of 15.9 ounces. Because of this, a certain proportion of products will fall outside the specification range.

The problem illustrated in Figure 6-8 is not uncommon, but it can lead to mistakes in the computation of the $C_p$ measure. Because of this, another measure of process capability is used more frequently:

$$C_{pk} = \min\left(\frac{\text{USL} - \mu}{3\sigma}, \frac{\mu - \text{LSL}}{3\sigma}\right)$$

where
$\mu$ = the mean of the process and
$\sigma$ = the standard deviation of the process.

This measure of process capability helps us address a possible lack of centering of the process over the specification range. To use this measure, the process capability of each half of the normal distribution is computed and the minimum of the two is used. Looking at Figure 6-8, we can see that the computed $C_p$ is 1:

Process mean: $\mu = 15.9$
Process standard deviation: $\sigma = 0.067$
LSL = 15.8
USL = 16.2

$$C_p = \frac{0.4}{6(0.067)} = 1$$

The $C_p$ value of 1.00 leads us to conclude that the process is capable. However, from the graph you can see that the process is not centered on the specification range

[Figure 6-8: Process variability (±3σ) not centered across the specification width; the process mean sits at 15.9 ounces while the specification runs from LSL = 15.8 to USL = 16.2.]
and is producing out-of-spec products. Using only the $C_p$ measure would lead to an incorrect conclusion in this case. Computing $C_{pk}$ gives us a different answer and leads us to a different conclusion:

$$C_{pk} = \min\left(\frac{\text{USL} - \mu}{3\sigma}, \frac{\mu - \text{LSL}}{3\sigma}\right) = \min\left(\frac{16.2 - 15.9}{3(.1)}, \frac{15.9 - 15.8}{3(.1)}\right) = \min(1.00, 0.33) = \frac{.1}{.3} = .33$$

The computed $C_{pk}$ value is less than 1, revealing that the process is not capable.

EXAMPLE 6.7: Computing the Cpk Value

Compute the $C_{pk}$ measure of process capability for the following machine and interpret the findings. What value would you have obtained with the $C_p$ measure?

Machine data: USL = 110, LSL = 50, process $\sigma$ = 10, process $\mu$ = 60.

Solution: To compute the $C_{pk}$ measure of process capability:

$$C_{pk} = \min\left(\frac{\text{USL} - \mu}{3\sigma}, \frac{\mu - \text{LSL}}{3\sigma}\right) = \min\left(\frac{110 - 60}{3(10)}, \frac{60 - 50}{3(10)}\right) = \min(1.67, 0.33) = 0.33$$

This means that the process is not capable. The $C_p$ measure of process capability gives us

$$C_p = \frac{60}{6(10)} = 1$$

leading us to believe that the process is capable. The reason for the difference between the measures is that the process is not centered on the specification range, as shown in Figure 6-9.
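Both capability indices can be wrapped in small functions. The following sketch reproduces Example 6.7 under the definitions above:

```python
def cp(usl: float, lsl: float, sigma: float) -> float:
    """Process capability index: specification width over process width."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl: float, lsl: float, mu: float, sigma: float) -> float:
    """Capability index that accounts for an off-center process mean."""
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# Machine data from Example 6.7
print(cp(110, 50, 10))        # 1.0  -> looks capable if centering is ignored
print(cpk(110, 50, 60, 10))   # 0.33 -> not capable once centering is considered
```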
[Figure 6-9: Process variability (±3σ, spanning 30 to 90) not centered across the specification width (LSL = 50, USL = 110) for Example 6.7. Process capability of machines is a critical element of statistical process control.]

Six Sigma Quality

The term Six Sigma® was coined by the Motorola Corporation in the 1980s to describe the high level of quality the company was striving to achieve. Sigma (σ) stands for the number of standard deviations of the process. Recall that ±3 sigma means that 2600 ppm are defective. The level of defects associated with Six Sigma is approximately 3.4 ppm. (Six sigma quality: a high level of quality associated with approximately 3.4 defective parts per million.) Figure 6-10 shows a process distribution with quality levels of ±3 sigma and ±6 sigma; you can see the difference in the number of defects produced.

[Figure 6-10: PPM defective for ±3σ versus ±6σ quality (not to scale): 2600 ppm versus 3.4 ppm.]
LINKS TO PRACTICE: Motorola, Inc. (www.motorola.com)

To achieve the goal of Six Sigma, Motorola has instituted a quality focus in every aspect of its organization. Before a product is designed, marketing ensures that product characteristics are exactly what customers want. Operations ensures that exact product characteristics can be achieved through product design, the manufacturing process, and the materials used. The Six Sigma concept is an integral part of other functions as well. It is used in the finance and accounting departments to reduce costing errors and the time required to close the books at the end of the month. Numerous other companies, such as General Electric and Texas Instruments, have followed Motorola's leadership and have also instituted the Six Sigma concept. In fact, the Six Sigma quality standard has become a benchmark in many industries.

There are two aspects to implementing the Six Sigma concept. The first is the use of technical tools to identify and eliminate causes of quality problems. These technical tools include the statistical quality control tools discussed in this chapter, as well as the problem-solving tools discussed in Chapter 5, such as cause-and-effect diagrams, flow charts, and Pareto analysis. In Six Sigma programs the use of these technical tools is integrated throughout the entire organizational system.

The second aspect of Six Sigma implementation is people involvement. In Six Sigma all employees have the training to use technical tools and are responsible for rooting out quality problems. Employees are given martial arts titles that reflect their skills in the Six Sigma process. Black belts and master black belts are individuals who have extensive training in the use of technical tools and are responsible for carrying out the implementation of Six Sigma. They are experienced individuals who oversee the measuring, analyzing, process controlling, and improving, acting as coaches, team leaders, and facilitators of the process of continuous improvement. Green belts are individuals who have sufficient training in technical tools to serve on teams or on small individual projects.

Successful Six Sigma implementation requires commitment from top company leaders. These individuals must promote the process, eliminate barriers to implementation, and ensure that proper resources are available. A key individual is a champion of Six Sigma: a person who comes from the top ranks of the organization and is responsible for providing direction and overseeing all aspects of the process.

ACCEPTANCE SAMPLING

Acceptance sampling, the third branch of statistical quality control, refers to the process of randomly inspecting a certain number of items from a lot or batch in order to decide whether to accept or reject the entire batch. What makes acceptance
How To Study Math
Before I get into the tips for how to study math let me first say that everyone studies differently and there is no one right way to study for a math class. There are a lot of tips in this document and there is a pretty good chance that you will not agree with all of them or find that you can’t do all of them due to time constraints. There is nothing wrong with that. We all study differently and all that anyone can ask of us is that we do the best that we can. It is my intent with these tips to help you do the best that you can given the time that you’ve got to work with.
Now, I figure that there are two groups of people reading this document: those who are happy with their grade but are interested in what I've got to say, and those who are not happy with their grade and want some ideas on how to improve. Here are a couple of quick comments for each of these groups.
If you have a study routine that you are happy with and you are getting the grade you want from your math class you may find this an interesting read. There is, of course, no reason to change your study habits if you’ve been successful with them in the past. However, you might benefit from a comparison of your study habits to the tips presented here.
If you are not happy with your grade in your math class and you are looking for ways to improve your grade there are a couple of general comments that I need to get out of the way before proceeding with the tips. Most people who are doing poorly in a math class fall into three main categories.
The first category consists of the largest group of students and these are students that just do not have good study habits and/or don’t really understand how to study for a math class. Students in this category should find these tips helpful and while you may not be able to follow all of them hopefully you will be able to follow enough of them to improve your study skills.
The next category is the people who spend hours each day studying and still don’t do well. Most of the people in this category suffer from inefficient study habits and hopefully this set of notes will help you to study more efficiently and not waste time.
The final category is those people who simply aren't spending enough time studying. Students are in this category for a variety of reasons. Some students have job and/or family commitments that prevent them from spending the time needed to be successful in a math class. To be honest, there isn't a whole lot that I can do for you if that is your case, other than to hope that you will become more efficient in your studies after you are through reading this. The vast majority of the students in this category, unfortunately, don't realize that they are in it. Many don't realize how much time they need to spend on studying in order to be successful in a math class; hopefully reading this document will help you realize that you do need to study more. Many simply aren't willing to make the time to study, as there are other things in their lives that are more important to them. While that is a decision you will have to make, realize that eventually you will have to take the time if you want to pass your math course.
Now, with all of that out of the way let’s get into the tips. I’ve tried to break down the hints and advice here into specific areas such as general study tips, doing homework, studying for exams, etc. However, there are three broad, general areas that all of these tips will fall into.
Math is Not a Spectator Sport
You cannot learn mathematics by just going to class and watching the instructor lecture and work problems. In order to learn mathematics you must be actively involved in the learning process. You’ve got to attend class and pay attention while in class. You’ve got to take a good set of notes. You’ve got to work homework problems, even if the instructor doesn’t assign any. You’ve got to study on a regular schedule, not just the night before exams. In other words you need to be involved in the learning process.
The reality is that most people really need to work to pass a math class, and in general they need to work harder at math classes than they do with their other classes. If all that you’re willing to do is spend a couple of hours studying before each exam then you will find that passing most math classes will be very difficult.
If you aren’t willing to be actively involved in the process of learning mathematics, both inside and outside of the class room, then you will have trouble passing any math class.
Work to Understand the Principles
You can pass a history class by simply memorizing a set of dates, names and events. You will find, however, that in order to pass a math class you will need to do more than just memorize a set of formulas. While there is certainly a fair amount of memorization of formulas in a math class, you need to do more. You need to understand how to USE the formulas, and that is often far different from just memorizing them.
Some formulas have restrictions on them that you need to know in order to correctly use them. For instance, in order to use the quadratic formula you must have the quadratic in standard form first. You need to remember this or you will often get the wrong answer!
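For instance, here is a made-up illustration of that point. To solve $3x^2 = 2x + 1$ with the quadratic formula, you must first put the equation in standard form so the coefficients can be read off correctly:

$$3x^2 - 2x - 1 = 0 \;\Rightarrow\; a = 3,\ b = -2,\ c = -1, \qquad x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{2 \pm \sqrt{16}}{6} = 1 \ \text{or} \ -\frac{1}{3}$$

Reading the coefficients off $3x^2 = 2x + 1$ as written would give the wrong values for $b$ and $c$, and hence the wrong roots.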
Other formulas are very general and require you to identify the parts in the problem that correspond to parts in the formula. If you don’t understand how the formula works and the principle behind it, it can often be very difficult to use the formula. For example, in a calculus course it’s not terribly difficult to memorize the formula for integration by parts for integrals. However, if you don’t understand how to actually use the formula and identify the appropriate parts of the integral you will find the memorized formula worthless.
Mathematics is Cumulative
You’ve always got to remember that mathematics courses are cumulative. Almost everything you do in a math class will depend on subjects that you’ve previously learned. This goes beyond just knowing the previous sections in your current class to needing to remember material from previous classes.
You will find a college algebra class to be very difficult without the knowledge that you learned in your high school algebra class. You can’t do a calculus class without first taking (and understanding) an Algebra and a Trigonometry class.
So, with these three main ideas in mind let’s proceed with some more specific tips to studying for a math class. Note as well that several of the tips show up in multiple sections since they are either super important tips or simply can fall under several general topics. |
|I am working on the lecture notes for the course "Theorie der chemischen Bindung" that Professor Klopper taught last summer semester. Since he is now giving this lecture again, I hope the notes will be of use to somebody. Unfortunately only Chapter 1 exists so far; the graphics are very time-consuming. I hope I can keep pace with the lecture; the most recent version will likewise be published here.|
|This semester I am working for Prof. Klopper as a HiWi (student assistant) for the fourth time, and for the second time as a tutor, this time for
the lecture Einführung in die Physikalische Chemie: Mathematische Methoden (A).
My tutorial takes place every Tuesday at 16:00 in seminar room AOC301.|
|So, the 2nd semester in Karlsruhe, and my 6th semester overall, is getting under way. I will be doing the AC-F-Praktikum lab course.
In addition, I am leading a tutorial for the lecture Physikalische Chemie II (online at: http://www.ipc.uni-karlsruhe.de/341.php).
So, the rest will follow later. If anyone is missing something or has suggestions, please send me a message.|
Factor analysis is a technique used to reduce a large number of variables to a smaller number of factors. These factors can be thought of as underlying constructs that cannot be measured by a single variable (for example, scores assigned to Likert items intended, together, to tap the structure of the latent variable 'intelligence'). A robust extraversion factor is typically found both when analyzing correlations between individual personality items, such as self-ratings of various personal qualities, and when analyzing correlations between multiple personality scales. In this way the original variables are divided into groups relatively independent of each other. Of the two types of factor analytic techniques, exploratory factor analysis is the most commonly used.

The initial factors extracted from a factor analysis are often difficult to interpret and name. Factors with a mixture of positive and negative loadings (often referred to as bipolar factors) usually become easier to understand after rotation. Rotation is sometimes viewed with suspicion; in general, however, such suspicion is misplaced, and factor rotation can be a useful procedure for simplifying an exploratory factor analysis solution. One caveat: once correlations between factors are allowed, the sum of squares of a factor's loadings can no longer be used to determine the amount of variance attributable to that factor. When the factors are uncorrelated the accounting is simple; in the two-factor solution, for instance, the first factor has variance 2.95 and accounts for 33% of the variation in the observed variables, while the factors in the three-factor solution together account for 57% of the variance.

Exploratory factor analysis also has a confirmatory counterpart: widely used SEM packages for conducting confirmatory factor analysis include LISREL (Jöreskog & Sörbom 1996), EQS (Bentler 1997), and AMOS (Arbuckle 1999). A minimal worked example of an exploratory factor analysis follows.
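Here is a minimal sketch of such an analysis in Python, using scikit-learn's FactorAnalysis with a varimax rotation; the loading pattern, sample size, and noise level are all invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
f = rng.normal(size=(n, 2))                            # two uncorrelated latent factors (z-scores)
load = np.array([[0.8, 0.0], [0.7, 0.0], [0.6, 0.0],   # items 1-3 load on factor 1
                 [0.0, 0.8], [0.0, 0.7], [0.0, 0.6]])  # items 4-6 load on factor 2
X = f @ load.T + rng.normal(scale=0.5, size=(n, 6))    # observed item scores

Z = StandardScaler().fit_transform(X)                  # standardize the observed variables
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(Z)
L = fa.components_.T                                   # item-by-factor loading matrix
print(np.round(L, 2))
# communality of each item = sum of its squared loadings
print("communalities:", np.round((L**2).sum(axis=1), 2))
# proportion of total variance accounted for by each factor
print("variance accounted for:", np.round((L**2).sum(axis=0) / L.shape[0], 2))
```

The recovered loadings should approximately reproduce the generating pattern, with each item loading strongly on one factor and near zero on the other.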
Formally, we will begin with the simplifying assumption that the unobserved factors are z-scores and are also uncorrelated. In exploratory factor analysis each observed variable is potentially a measure of every factor, and the goal is to determine which relationships between observed variables and factors are strongest; the technique is thus used to reduce data to a smaller set of summary variables and to explore the underlying theoretical structure of the phenomena. The widespread acceptance of extraversion as a fundamental trait, for example, owes much to multivariate psychometric studies that consistently show the emergence of an extraversion dimension from a variety of personality data (typically, questionnaire responses).

Internal validity can be tested in several ways. The main approach for dietary patterns derived through exploratory factor analysis is to apply confirmatory factor analysis . The Framingham study instead assessed the internal validity of five a posteriori dietary patterns extracted by cluster analysis using an alternative technique, discriminant analysis, to measure the stability of the patterns, and other researchers have calculated the Calinski–Harabasz and Davies–Bouldin indices of internal validity to identify quantitatively the number of patterns to retain [71,80].

A crucial decision in exploratory factor analysis is how many factors to extract. One formal procedure is sequential: starting with some small value of k (usually one), the test for the number of factors is applied and, if the test is nonsignificant, the current value of k is deemed acceptable; otherwise k is increased by one and the process is repeated until an acceptable solution is found. Informally, most researchers use the eigenvalue criterion (Kaiser's criterion, eigenvalues > 1.0) or the scree plot ('scree' is a Scandinavian noun for the accumulation of loose stones at the base of a hill or mountain; geologists do not use it to determine the height of the hill). A short sketch of the eigenvalue criterion follows.
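On invented data, the eigenvalue criterion amounts to a few lines: compute the eigenvalues of the item correlation matrix, count how many exceed 1.0, and plot them in decreasing order for a scree plot.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
X[:, 1] += X[:, 0]                 # make items 0-2 a correlated block
X[:, 2] += X[:, 0]

eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
print("eigenvalues:", np.round(eig, 2))   # they sum to the number of items
print("Kaiser criterion retains", int((eig > 1.0).sum()), "factor(s)")
```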
In general terms, factor analysis is concerned with whether the covariances or correlations between a set of observed variables x1, x2, …, xp can be explained in terms of a smaller number of unobservable latent variables (common factors) f1, f2, …, fk, where k < p (hopefully k, the number of common factors, will be much less than the number of original variables p). Characteristically, the observed variables are first standardized (mean of zero and standard deviation of 1). A latent variable is called a factor, and the associations between latent and observed variables are called factor loadings. The factor analysis model is essentially a regression-type model in which the observed variables are regressed on the assumed common factors; the specific variates play no part in determining the covariances of the observed variables and contribute only to their variances. When the factor analysis is carried out on the observed correlation matrix rather than the covariance matrix, the estimated regression coefficients are simply the correlations between each manifest variable and each latent variable. The variables used in factor analysis should be linearly related to each other; dummy variables can also be considered, but only in special cases.

Some terminology: when factors are calculated from the correlation matrix of the variables, the analysis is called R-type factor analysis, whereas when factors are calculated from correlations between individual respondents it is called Q-type factor analysis. The two most commonly used fitting methods are principal factor analysis and maximum likelihood factor analysis, both described in Everitt and Dunn (1991). Loadings are commonly judged by magnitude, with greater than ±.30 the minimum consideration level and about ±.50 regarded as practically significant.

Rotation then simplifies interpretation. In orthogonal rotation the factors remain uncorrelated: QUARTIMAX simplifies the rows of the loading matrix, so that each variable loads mainly on a single factor, while VARIMAX simplifies the columns, making large loadings larger and small loadings smaller. In the two-factor solution, both factors together account for 49% of the variance. A from-scratch varimax sketch follows.
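Rotation itself is a small algorithm. The following is a common SVD-based varimax implementation (essentially Kaiser's procedure as it is usually coded); it is a sketch rather than a reference implementation, applied here to a made-up loading matrix.

```python
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-8):
    """Rotate a loading matrix L (items x factors) by the varimax criterion."""
    p, k = L.shape
    R = np.eye(k)                  # accumulated orthogonal rotation
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr**3 - (gamma / p) * Lr @ np.diag((Lr**2).sum(axis=0))))
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):  # criterion has stopped improving
            break
        d = d_new
    return L @ R

# hypothetical unrotated loadings with a mixed (bipolar-looking) pattern
L0 = np.array([[0.61, 0.40], [0.52, 0.46], [0.58, -0.61], [0.49, -0.52]])
print(np.round(varimax(L0), 2))    # rotation pushes each item toward one factor
```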
Factor analysis assumes that variance can be partitioned into two types, common and unique: the common variance of a variable is shared with the other variables through the factors, while subtracting the communality (the sum of a variable's squared loadings) from one gives its specific, unique variance. The various methods of rotation (of which there are several) each optimize a somewhat different criterion in their aim to achieve simple structure; varimax, as noted above, makes large loadings larger and small loadings smaller within each factor.

Because factor structures should be comparable across groups, congruence measures such as Tucker's ϕ have been developed to indicate whether the pattern of factor loadings across the items of a factor is the same across cultural groups; values below 0.90 are taken to indicate that one or more items show deviant factor loadings. In personality research, additional instruments are designed to be compatible with psychobiological theories of extraversion: the FFM can also be assessed through lexical models, based on single-adjective descriptors of personality; Reinforcement Sensitivity Theory (RST: Corr, 2009) broadly relates extraversion to reward sensitivity via the behavioral approach system (BAS); and extraversion has been split into correlated 'aspects', believed to correspond to distinct, though overlapping, neuroanatomical systems (DeYoung et al.; Grodin and White, 2015), which by their nature may correspond to separable genetic factors.

Applications are varied. An exploratory factor analysis of the SUPPH was performed using a varimax rotation; in another instrument-development example, an EFA yielded a 16-item measure with a two-factor solution, 11 items measuring a factor called Unpredictability/Ambiguity and five items measuring a factor called Comprehension. Beyond questionnaires, factor analysis has been used to study polymer changes in crystallinity, mapping crystalline and amorphous regions of an area of syndiotactic polystyrene by Raman microscopic imaging (Appl. Spectrosc., 1998, 52, 1264–8). Finally, EFA and CFA play complementary roles: EFA is used to explore the possible underlying factor structure of a set of variables, and CFA is used to verify it, bridging the often-observed gap between theory and observation. For the number-of-factors decision, packages such as nFactors in R offer a suite of helpful functions, and the Wikipedia article on exploratory factor analysis lists further resources. A small sketch of Tucker's congruence coefficient follows.
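Tucker's congruence coefficient is simple enough to state in a few lines; the loading vectors below are hypothetical.

```python
import numpy as np

def tucker_phi(x, y):
    """Tucker's congruence coefficient between two factor-loading vectors."""
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

a = np.array([0.80, 0.72, 0.61, 0.10])   # loadings in group 1 (hypothetical)
b = np.array([0.76, 0.70, 0.55, 0.05])   # loadings in group 2 (hypothetical)
print(round(tucker_phi(a, b), 3))        # values below ~0.90 flag incongruence
```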
With over 15 years of tutoring experience and having graduated Magna Cum Laude with a degree in Applied Mathematics from UCLA, I am among the most qualified math tutors available. I have been tutoring students in everything from Pre-Algebra through Calculus and Statistics. I enjoy the subtleties of mathematics and find it greatly fulfilling when I see my students gain a greater insight into the subject's many, many intricacies. I am well aware that different students learn math differently.
I focus on keeping the strategies simple in algebra. Many strategies cross over to several different kinds of problems. Once these patterns for strategies are learned, new problems don't seem as mysterious.
As with all the math subjects I tutor, I've tutored Algebra 2 a lot. Having a degree in Applied Mathematics from UCLA, I have a strong grasp of the wide array of topics they introduce in Algebra 2, from higher order polynomials to matrices; from exponents to simplifying radicals. Often times, Algebra 2 can be an overwhelming subject since it introduces so many new, seemingly-unrelated topics. I work with students to show how all this work ties together.
In Calculus, new concepts are introduced that require higher levels of abstract thinking. I work with students to deconstruct the more complex ideas into simpler, understandable pieces. From limits, to derivatives, to integrals, to refreshing necessary skills from earlier math, I have been working with students in this subject for years with great success. As for my educational qualifications, I have a degree from UCLA in Applied Mathematics, graduating Magna Cum Laude. This training has given me a rock-solid foundation in all the math subjects I teach.
I work with the students to make sure they see how all the theorems, postulates, and corollaries tie together, making the subject less about a bunch of random facts to memorize. This makes Geometry more cohesive, understandable, enjoyable, and easy to learn.
Prealgebra introduces and expands upon a lot of concepts. Everything from exponents to variables, this information can be a bit unwieldy at first. I focus on breaking down the concepts to smaller pieces to make them easy to absorb.
Precalculus brings together several years' worth of mathematical training and then adds even more. Greater mastery of rational functions and trigonometry is required, and then come matrices and complex systems of equations. I am able to alert students to the core concepts that they need to know to succeed, making a massive heap of information more manageable and easy to understand.
Trigonometry is wonderfully intricate material containing a wide variety of topics such as: triangles, similar triangles, right triangles, special triangles, not-so-special triangles, more triangles, other triangles, a cute triangle (as long as it's not being obtuse), an extra helping of triangles, and triangle pi. (I would like to formally apologize for any injuries caused by those fantastic puns. Please let me know when there's any sine of improvement....) Trigonometry is a subject that can truly make a student go bonkers. Luckily, because trigonometry is so focused on triangles, much of the information becomes tightly interrelated. With me, students learn the value of drawing triangles. In so doing, solutions to problems become much more attainable and certain. I have been tutoring for over 15 years. As for my educational background, I graduated Magna Cum Laude from UCLA with a Bachelor of Science degree in Applied Mathematics.
I assist students by showing them where information can generally be found and what clues they should look for. Whether their class allows the use of a graphing calculator, a scientific calculator, or simply a basic calculator, I am prepared to guide students to get the most out of their technology.
My approach with SAT math is a combination of clarification, refreshment, and saturation. The SATs cover a wide breadth of mathematical knowledge, and I help to fill in the gaps that students have in any of the subjects. Along with reeducating students in the subjects, I strongly encourage a lot of practice with SAT questions; familiarity with the style of questions that can appear on the test will give students a great advantage when they finally approach the exam.
I have taught classes in ACT math and have helped my students achieve their academic goals. Similar to SAT Math, my strategy for students is about thoroughly re-familiarizing students with all the math they learned over the past several years. Typically, we will go through an ACT book of problems and do several exercises together, inspiring dozens of those "Oh yeah!" moments. After each lesson, the student gets stronger in the material, and can feel it. I have been happily tutoring for over 15 years, and so I have learned to relate mathematical concepts to a wide variety of students. As for my educational background, I graduated Magna Cum Laude from UCLA with a degree in Applied Mathematics. |
Mathematical Problems in Engineering
Volume 2010 (2010), Article ID 279038, 26 pages
A Fully Discrete Discontinuous Galerkin Method for Nonlinear Fractional Fokker-Planck Equation
1Department of Mathematics, Shanghai University, Shanghai 200444, China
2Department of Mathematics, Huainan Normal University, Huainan 232038, China
Received 13 July 2010; Revised 16 September 2010; Accepted 19 October 2010
Academic Editor: J. Jiang
Copyright © 2010 Yunying Zheng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The fractional Fokker-Planck equation is often used to characterize anomalous diffusion. In this paper, a fully discrete approximation of the nonlinear spatial fractional Fokker-Planck equation is given, in which the discontinuous Galerkin finite element approach is used in the time domain and the Galerkin finite element approach is used in the spatial domain. An a priori error estimate is derived in detail. Numerical examples are presented that are in line with the theoretical convergence rate.
Many models in physics and chemistry are successfully described by the Langevin equation, which was introduced almost 100 years ago. For some particular cases, such as diffusion, the original Langevin equation can be transformed into the Fokker-Planck equation. Hänggi and Thomas associated a Gaussian distribution of the increments of the noise-generating process with the classical Fokker-Planck equation. Sun et al. discussed the fractional model for anomalous diffusion. Metzler et al. and Dubkov and Spagnolo derived the fractional Fokker-Planck equation from different anomalous diffusion procedures. Metzler and Klafter discussed the fractional kinetic equation and its relation to the fractional Fokker-Planck equation. Dubkov et al. introduced the Fokker-Planck equation for Lévy flights. The Fokker-Planck equation is now one of the best tools for characterizing anomalous diffusion, especially sub- and superdiffusion. Meanwhile, the fractional Fokker-Planck equation has found use in a relatively wide range of applied sciences, such as plasma physics, population dynamics, biophysics, engineering, neuroscience, nonlinear hydrodynamics, and marketing; see [7–13].
The Fokker-Planck equation describes the evolution of a random function in space and in time, so different assumptions on the probability density function lead to a variety of space-time equations. In this paper, we mainly study the model described by the following fractional Fokker-Planck equation, which is a special case of the model in : where denotes the left fractional derivative of order in the sense of Caputo.
There are various numerical methods for finding approximate solutions of fractional differential equations [14–21]. The discontinuous Galerkin finite element method, however, is particularly attractive for partial differential equations because of its flexibility and efficiency in terms of meshes and shape functions, and because higher-order convergence can be achieved without excessive iteration. Such a method was first proposed and analyzed in the early 1970s as a technique for seeking numerical solutions of partial differential equations, and it has become a very attractive tool for initial-value problems for ODEs and initial-boundary value problems for PDEs; see [22–27].

The rest of this paper is organized as follows. The fractional derivative space is introduced in Section 2. In Section 3, the discontinuous Galerkin finite element scheme is introduced, and the existence and uniqueness of the numerical solution are proved. The error estimate of the discontinuous Galerkin finite element approximation is studied in Section 4. Finally, in Section 5, numerical examples are given to illustrate the theoretical results.
2. Fractional Derivative Space
In this section, we first introduce the fractional integral, the fractional Caputo derivative, and their properties.
Definition 2.1. The $\alpha$th-order left and right Riemann-Liouville integrals of a function $u$ are defined as
$${}_{a}I_{x}^{\alpha}u(x)=\frac{1}{\Gamma(\alpha)}\int_{a}^{x}\frac{u(s)}{(x-s)^{1-\alpha}}\,ds,\qquad {}_{x}I_{b}^{\alpha}u(x)=\frac{1}{\Gamma(\alpha)}\int_{x}^{b}\frac{u(s)}{(s-x)^{1-\alpha}}\,ds,$$
where $\alpha>0$ and $\Gamma(\cdot)$ denotes the Gamma function.
Definition 2.2. The $\alpha$th-order left and right Caputo derivatives of a function $u$ are defined as
$${}^{C}_{a}D_{x}^{\alpha}u(x)=\frac{1}{\Gamma(n-\alpha)}\int_{a}^{x}\frac{u^{(n)}(s)}{(x-s)^{\alpha+1-n}}\,ds,\qquad {}^{C}_{x}D_{b}^{\alpha}u(x)=\frac{(-1)^{n}}{\Gamma(n-\alpha)}\int_{x}^{b}\frac{u^{(n)}(s)}{(s-x)^{\alpha+1-n}}\,ds,$$
where $n-1<\alpha<n$ and $n\in\mathbb{N}$.
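Although the paper's discretization is a Galerkin one, the left Caputo derivative in Definition 2.2 can be approximated directly on a uniform grid by the classical L1 formula (for 0 < α < 1). The sketch below illustrates the definition only; it is not part of the paper's scheme.

```python
import numpy as np
from math import gamma

def caputo_l1(u, h, alpha):
    """L1 approximation of the left Caputo derivative (0 < alpha < 1) of the
    samples u[0..N] on a uniform grid of spacing h; returns values at x_1..x_N."""
    N = len(u) - 1
    du = np.diff(u)
    out = np.zeros(N)
    for n in range(1, N + 1):
        k = np.arange(n)          # piecewise-linear quadrature of the definition
        w = (n - k) ** (1 - alpha) - (n - k - 1) ** (1 - alpha)
        out[n - 1] = (w * du[:n]).sum() * h ** (-alpha) / gamma(2 - alpha)
    return out

alpha, h = 0.5, 1.0 / 200
x = np.arange(0, 1 + h / 2, h)
approx = caputo_l1(x, h, alpha)             # test function u(x) = x
exact = x[1:] ** (1 - alpha) / gamma(2 - alpha)
print(np.max(np.abs(approx - exact)))       # exact for linear u, up to rounding
```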
Definition 2.3. Let $\alpha>0$; define the fractional derivative space
$$J^{\alpha}(\Omega)=\bigl\{u\in L^{2}(\Omega)\;:\;{}^{C}_{a}D_{x}^{\alpha}u\in L^{2}(\Omega)\bigr\},$$
endowed with the seminorm $|u|_{J^{\alpha}}=\|{}^{C}_{a}D_{x}^{\alpha}u\|_{L^{2}(\Omega)}$ and the norm $\|u\|_{J^{\alpha}}=\bigl(\|u\|_{L^{2}(\Omega)}^{2}+|u|_{J^{\alpha}}^{2}\bigr)^{1/2}$.
With the help of the Fourier transform, one can show that the three expressions , , and are equivalent . Hence and can also be taken as seminorms of the fractional space , and when we use the seminorm of the fractional space there is no difference among them. We now restrict our discussion to the case , and use the following notation: is rewritten as , with norm or and seminorm or . We denote by the closure of under its norm.
The following lemmas are useful for our discussions later on.
Lemma 2.4 (see ). Let ; if , then
Lemma 2.5 (see ). For , , then
Lemma 2.6. For , then
Lemma 2.7 (see ). For , , then For , then For , then
Lemma 2.8. Let . The following mapping properties hold: The proof is similar to that in .
3. The Space-Time Discontinuous Galerkin Finite Element Approximation
In this section, we formulate a fully discrete discontinuous Galerkin finite element method for a type of nonlinear spatial fractional Fokker-Planck equation.
Problem 1 (nonlinear fractional Fokker-Planck equation). For , where is a bounded domain. For positive constants and , the coefficients and satisfy

Throughout the paper, we always assume that the following mild Lipschitz continuity conditions on are satisfied: there exists a positive constant such that for , and , we have
Let be a partition of the spatial domain . Define as the diameter of the element and . Let be a finite element space, where denotes a set of polynomials of degree on a given domain ; the functions in are continuous on .

Let us consider a partition of the time domain , , and , , . On each time slab , we define a discrete function space as The functions in the space may be discontinuous at the time nodes , but are at least left continuous and right continuous. The functions in the space are polynomials whose degree is no more than .
Define a discrete function space on as Moreover, the space can be characterized as follows: where is a positive integer. In other words, for each the functions in are elements of , and for each they are piecewise polynomial functions in of degree with possible discontinuities at the nodes . Set .
We introduce some norms on the various spaces, to be used later on. The space is equipped with the norms
and is equipped with the norms
In order to derive a variational form of Problem 1, we assume that is a sufficiently smooth solution of Problem 1, multiply by an arbitrary , and integrate to obtain the formulation where denotes the inner product on . Integrating by parts on the right, noting that for all , , and using the discontinuity at the time nodes , one has The notation denotes the jump of the function at the time node ; that is,

Using the superscripts "−" and "+" for left and right limits, respectively, the jump is described by It reflects the discontinuity of the scheme, and the corresponding inner product term represents the transfer of the solution between adjacent time-space slabs.
Thus, we define
Definition 3.1. For all , the function is a variational solution of Problem 1 provided that
Now we are ready to describe a fully discrete space-time finite element method for solving the nonlinear Problem 1, where the Galerkin finite element method is used in the spatial domain and a discontinuous finite element approximation is used in the time domain.

The value of on is replaced by the initial condition . The term including can be moved to the right-hand side of (3.16). Once it is computed, the value of on differs from that on ; this jump is the error introduced by the time discretization in the numerical scheme.

Owing to the discontinuity at each node in , the computation of can be decoupled slab by slab: once is known on , this value is taken as the initial condition for the next time slab, and the corresponding slab equation is solved.
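The marching structure can be seen already in the simplest case: for polynomial degree zero in time, a DG-in-time step for the scalar model ODE u′ = f(u) reduces to the implicit Euler method, with the jump term handing each slab the previous slab's end value. The sketch below, with an invented right-hand side, illustrates only this slab-by-slab decoupling; it is not the paper's fully discrete scheme.

```python
def f(u):
    return -u * (1.0 - u)        # hypothetical nonlinear right-hand side

T, N = 2.0, 40
dt = T / N
u = 0.9                          # initial condition u(0)
for n in range(N):
    v = u                        # start the implicit solve from the carried value
    for _ in range(50):          # fixed-point iteration (contractive for small dt)
        v = u + dt * f(v)
    u = v                        # end-of-slab value, passed to the next slab
print(u)
```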
Next, we investigate the existence and uniqueness of solutions of the numerical scheme; first, we give the scheme in detail. For a fixed integer $q$, let $\ell_{1},\dots,\ell_{q}$ be the Lagrange polynomials associated with the abscissae $\tau_{1}<\dots<\tau_{q}$; that is,
$$\ell_{j}(\tau)=\prod_{\substack{k=1\\ k\neq j}}^{q}\frac{\tau-\tau_{k}}{\tau_{j}-\tau_{k}}.$$
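For concreteness, a small sketch of these basis polynomials (with invented abscissae), verifying the defining Kronecker-delta property:

```python
import numpy as np

def lagrange_basis(tau, j, x):
    """Evaluate the j-th Lagrange basis polynomial for the nodes tau at points x."""
    out = np.ones_like(x, dtype=float)
    for k, t_k in enumerate(tau):
        if k != j:
            out = out * (x - t_k) / (tau[j] - t_k)
    return out

tau = np.array([0.2, 0.6, 1.0])          # example abscissae (hypothetical)
for j in range(len(tau)):
    print(np.round(lagrange_basis(tau, j, tau), 12))   # rows of the identity
```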
For the quadrature on the time slab , we use the Radau quadrature rule: for a given function $g$, the approximation
$$\int_{-1}^{1}g(\tau)\,d\tau\approx\sum_{j=1}^{q}w_{j}\,g(\tau_{j})$$
holds, where the $\tau_{j}$ are the Radau points and the $w_{j}$ the corresponding weights. This quadrature rule is exact for all polynomials of degree at most $2q-2$. Using the linear transformation which maps $[-1,1]$ onto , the Radau rule on is then obtained.
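A numerical sketch of the Radau rule follows. The paper does not spell out which endpoint is included; the version below fixes the right endpoint (as in Radau IIA time stepping), takes the interior nodes as roots of the Jacobi polynomial P_{q−1}^{(1,0)}, and recovers the weights from the moment conditions.

```python
import numpy as np
from scipy.special import roots_jacobi

def radau_right(q):
    """q-point Gauss-Radau nodes/weights on [-1, 1], right endpoint fixed."""
    x = np.array([1.0]) if q == 1 else np.append(roots_jacobi(q - 1, 1, 0)[0], 1.0)
    k = np.arange(q)
    V = x[None, :] ** k[:, None]               # Vandermonde moment system
    m = (1.0 + (-1.0) ** k) / (k + 1.0)        # moments of t^k on [-1, 1]
    w = np.linalg.solve(V, m)
    return x, w

x, w = radau_right(3)
print(x, w)
print(w @ x**4, 2.0 / 5.0)   # exact for degree 2q-2 = 4, as claimed
```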
We choose as the basis functions for the piecewise polynomial function space . Then is uniquely determined by the functions , such that
Taking into (3.17), where , we have
On time slab , we define a Lagrange interpolation operator , such that where the interpolation points are Radau points. It is easy to see that for given , and are available. On time slab , , with the definition of interpolation operator, the discontinuous Galerkin scheme can be rewritten in detail as follows:
Next we introduce a lemma which is useful to prove the existence and estimate the error.
Consider a matrix , where .
It is clear that are independent of . And if , then
Lemma 3.2 (see ). Let be the matrixes If , then there holds where .
In order to prove the existence and uniqueness, we need to define a new representation of by where . Then is uniquely determined by the function .
We choose , where , and use the new expression to obtain the following results:
Setting and , the above equation can be rewritten as
Proof of Theorem 3.3. The vector space is a Hilbert space with finite dimension. For all , , equipped with the norm
Define a map from to itself by Since is a linear system, are all continuous maps, and so the map is a continuous map from into itself. By the Brouwer fixed-point theorem, there exists at least one fixed point such that ; hence (3.32) has at least one solution, denoted by . Next we investigate the uniqueness of the solution.

Let and be two solutions of (3.32). Setting and summing from 1 to , we have According to Lemma 3.2, for the first two terms of the right-hand side of (3.36) there exists a constant α such that With the help of the boundedness assumptions on and , we have According to the boundedness of the fractional operator , the inequality holds. Furthermore, we have
The fifth term of the right-hand side of (3.36) is estimated in
And by the Brouwer fixed point theorem, one gets furthermore,
By using inequalities (3.37)–(3.40) and (3.42), one has
Choosing , we obtain , which is impossible. Hence the uniqueness is proved.
4. Error Estimation
Now we turn to the error analysis of the D-G scheme.
Let be the approximate solution on time slab . Denote as a Ritz-Galerkin projection operator defined as follows:
And , then one has where is defined by It can be seen that is an element of .
Lemma 4.1. Let , be smooth functions on , and , , and is defined as above, then
Proof of Lemma 4.1. For all , is the solution of the following equation:
Hence the following equation holds
For all , with the aid of the approximation properties of and the weak form, we can derive Therefore
Lemma 4.2. Let be defined as (4.1), then and are bounded, where .
Proof of Lemma 4.2. By the properties of the norm, the following inequality is valid: According to the interpolation estimate, we get From Lemmas 2.8 and 4.1 and the inverse estimate, we know that . Therefore, it is easy to see that So The boundedness of follows from the above; the proof for is similar.
Let be the usual Lagrange interpolation operator at the Radau points on ; that is, where are the Radau points. Therefore, we can see that
Lemma 4.3. The interpolation operation on has the following property:
Lemma 4.4. For defined above, we have the following error estimates:
We are now ready to prove the convergence result. Substituting into the D-G finite element formula (3.17) and setting , we obtain the basic error equation as follows: where and .
Proof of Theorem 4.5. It is necessary to rewrite , and the discrete error, as follows:

where . We also denote by and . Noting that , , we have and . Setting and , the basic error equation can be rearranged as follows:
Letting and summing from 1 to , the error equation becomes

Using the boundedness assumptions on and , the following inequality is obtained:

Since is bounded and, by Lemma 2.8, is a bounded linear map, the following inequality is established:
The first and second terms of the left side of (4.24) can be seen as
Substituting (4.24)–(4.27) into the left side of (4.22) yields
As to , for , there exists
This means that there exist constants and , independent of , such that
In order to estimate , we apply the interpolation operator. Let be the interpolation operator on the time slab , whose order is less than . The interpolation points involve not only the Radau points but also , and these points satisfy . Then, for every , is a polynomial whose order is ,

As to , when is sufficiently small, with the aid of the Hölder inequality we obtain where

Almost in parallel with the estimation of , the analysis of follows from the boundedness of , and .
The last term contains |
Previous work has suggested that perturbation theory is unreliable for Higgs- and Goldstone-boson scattering, at energies above the Higgs mass, for relatively small values of the Higgs quartic coupling . By performing a summation of nonlogarithmic terms, we show that perturbation theory is in fact reliable up to relatively large coupling. This eliminates the possibility of a strongly-interacting standard Higgs model at energies above the Higgs mass, complementing earlier studies which excluded strong interactions at energies near the Higgs mass. The summation can be formulated in terms of an appropriate scale in the running coupling, , so it can easily be incorporated in renormalization-group improved tree-level amplitudes as well as higher-order calculations.
Ruling Out a Strongly-Interacting
Standard Higgs Model
Institut für Theoretische Physik
Technische Universität München
85747 Garching b. München
Department of Physics
University of Illinois
1110 West Green Street
Urbana, IL 61801
The electroweak interaction is a gauge theory, with the gauge symmetry spontaneously broken to that of electromagnetism. A major outstanding problem in particle physics is to discover the mechanism which breaks the symmetry. The simplest model of the symmetry-breaking mechanism is the standard Higgs model, in which a fundamental scalar field acquires a vacuum-expectation value GeV . The particle content of the model is a spin-zero boson, dubbed the Higgs boson (), and three massless Goldstone bosons () which are ultimately absorbed by the weak gauge bosons.
It has been established that the standard Higgs model exists only up to a cutoff energy at or before which the model must be subsumed by a more fundamental theory . Thus the standard Higgs model is regarded as an effective field theory, valid for energies less than . The maximal allowed value of decreases with increasing Higgs mass, . Demanding, for consistency, that leads to an upper bound on the Higgs mass .
In this paper we address the question of whether the standard Higgs model can be strongly interacting at energies and Higgs masses less than the cutoff . By “strongly interacting” we mean that the Higgs self-coupling, , is so large that perturbation theory is unreliable. There are two scenarios which yield a large value for the Higgs coupling: (i) The running coupling increases with increasing scale , leading to a strong coupling at energies above the Higgs mass; (ii) A Higgs mass, , much larger than the vacuum-expectation value, GeV, results in a large coupling . Since the Higgs model is constrained by the cutoff , the two possibilities lead to the following questions:
Can the running coupling become strong for energies in the range ?
Can be strong for values of the Higgs mass below the cutoff ?
The first question is related to high-energy processes such as Higgs- and Goldstone-boson scattering. The second question can be investigated in the context of Higgs-boson decays. Since both of these processes have been calculated at next-to-next-to-leading order in perturbation theory, they are appropriate indicators of the reliability of perturbation theory.
A popular way to model the cutoff is to use a lattice with a finite lattice spacing . For a review of early work, we refer to Ref. , whereas a more current set of references is given in, for example, Refs. [4, 5]. Using such an approach, the cutoff is proportional to . When lattice-spacing effects on physical quantities are small, the model is equivalent to the standard Higgs model in the continuum. When lattice-spacing effects are large, the standard Higgs model ceases to exist as an effective field theory. This observation can be used to establish an upper bound on the Higgs mass.
Using the condition that the inverse lattice spacing be greater than twice the Higgs mass (), Lüscher and Weisz [6, 7] determined an upper bound on of 3.2, corresponding to an upper bound on the Higgs mass of 630 GeV. A subsequent study found a similar upper bound on . Alternative formulations of the lattice action can increase the bound slightly [4, 5]. Lüscher and Weisz argued that perturbation theory is reliable for a Higgs coupling of . They based their statement on observations regarding three perturbative observables: (i) Such a value of yields a perturbative Higgs width which is much less than its mass, (ii) Two-loop perturbative cross sections at threshold in the symmetric phase of the model are apparently convergent for such a coupling, and (iii) This coupling is less than the perturbative unitarity bound on . (The perturbative unitarity "bound" is not an absolute bound on the possible value of (or the Higgs mass), but rather the value above which the coupling is strong. In contrast, the lattice bound on the coupling is truly a bound, in the sense that the standard Higgs model cannot exist as an effective field theory if the coupling exceeds this value.) They therefore concluded that there is no strongly-interacting Higgs model in which the cutoff is substantially greater than the Higgs mass.
Recent perturbative studies of high-energy Higgs- and Goldstone-boson scattering in the broken phase of the model have led to a different conclusion [8, 9, 10, 11]. Considering the high-energy limit, the relevant coupling of these observables is the running coupling , where is of the order of the center-of-mass energy, . Using a variety of criteria, all high-energy studies found that the two-loop high-energy perturbative amplitudes do not converge satisfactorily for . For example, Durand, Lopez, and Johnson argued that perturbation theory is unreliable for as low as . This conclusion was based on a one-loop analysis of partial-wave unitarity in Higgs- and Goldstone-boson scattering, and on the lack of convergence of the perturbation series. Using a variety of additional criteria to judge the convergence of the perturbative series, subsequent analyses at two loops have only served to reinforce this conclusion [9, 10, 11]. Since the running coupling can attain a value of for values of , the standard Higgs model could be strongly interacting at energies above the Higgs mass but below the cutoff .
In this paper we reinvestigate the perturbative behaviour of the high-energy Higgs- and Goldstone-boson scattering. We introduce a summation procedure which shifts the value of the coupling at which perturbation theory becomes unreliable to . Requiring that the energy be less than the cutoff , the perturbative bound is large enough to ensure the absence of a strongly-interacting Higgs sector at high energies. Thus our summation procedure restores the convergence of perturbation theory at energies above the Higgs mass but below the cutoff . This is a new result, and complements the result of Lüscher and Weisz on the impossibility of a strong Higgs coupling at . We conclude that the possibility of a strongly-interacting standard Higgs model is eliminated at all energies.
Our summation procedure is based on identifying a certain class of Feynman diagrams which can be summed by an appropriate scale in the running coupling . Calculating high-energy Higgs- and Goldstone-boson scattering, all previous analyses have implicitly or explicitly chosen [8, 9, 10], or have varied the scale about this value . We argue that a better scale is . This scale corresponds to a summation of a universal nonlogarithmic term which accompanies the leading logarithms in the Higgs- and Goldstone-boson scattering diagrams. We show that this summation greatly improves the convergence of perturbation theory: the coefficients of the perturbative series are greatly reduced as seen up to two loops, and the scale dependence is significantly reduced when varying around (rather than ).
The remainder of the paper is organized as follows. In section 2 we reanalyse Higgs- and Goldstone-boson scattering up to two loops, and argue for the appropriate scale in the running coupling . We consider the convergence of perturbation theory and the partial-wave unitarity of these scattering amplitudes with this improved choice of scale. We derive the value of the running coupling for Higgs- and Goldstone-boson scattering at the cutoff , and find that it is within the range of validity of perturbation theory. In section 3 we briefly review the Higgs decay amplitude at two loops. We show that for the decay amplitude, the natural scale is unaffected by our summation procedure. The value of at which perturbation theory becomes unreliable in Higgs decays remains unchanged from a previous analysis. In section 4 we discuss some phenomenological consequences of our work for scattering cross sections. We summarize our results in section 5.
II Higgs- and Goldstone-boson scattering
An estimate of the value of the Higgs running coupling at which perturbation theory becomes unreliable can be obtained from the evaluation of scattering processes in the standard Higgs model at high energy [12, 13, 8, 9, 10, 11]. The basis for such analyses is the generic high-energy scattering amplitude of Higgs and Goldstone bosons. Up to two loops the relevant Feynman scattering diagrams are shown in Fig. 1. Including the combinatoric factors, the unrenormalized scattering amplitude is
Here $N$ is the number of Goldstone bosons, and $\lambda_0$ denotes the bare Higgs quartic coupling. The two loop functions correspond to the Feynman diagrams depicted in Fig. 1: the one-loop “bubble” diagram and the two-loop “acorn” diagram. The renormalized amplitude is
where the renormalization involves the wavefunction renormalization of the Goldstone-boson fields and the coupling counterterm.
It is standard in both lattice and continuum calculations to express the renormalized amplitude in terms of the Higgs mass, $m_H$, and the coupling $\lambda$ [6, 8, 15]. (To make contact with the notation of Refs. [6, 7] and of most subsequent lattice work, note that the two conventions for the quartic coupling differ by a constant rescaling.)
where $v$ is the vacuum-expectation value of the Higgs field, defined by $v = (\sqrt{2}\,G_F)^{-1/2} \approx 246$ GeV, with $G_F$ extracted from some low-energy weak process, such as muon decay. Up to numerically small corrections (see Appendix A), $m_H$ corresponds to the physical Higgs mass. The wavefunction renormalization constant and the coupling counterterm are known up to two loops [14, 16].
In the limit $s \gg m_H^2$, the physical scattering amplitudes of the Higgs and Goldstone bosons are related to the generic high-energy amplitude in the following way:
Of particular interest is the approximate SO(4) singlet scattering amplitude. It is the $s$-wave projection of this amplitude which yields the strongest unitarity bound in perturbation theory. At tree level, it equals the SO(4) singlet eigenamplitude considered by Lee, Quigg, and Thacker. The corresponding tree-level SO(4) singlet eigenstate is
and the tree-level eigenamplitude is expressible in terms of the generic amplitude:
At one loop and beyond, the eigenstate mixes with the isospin-singlet component of the SO(4) nonet states [8, 9]. The resulting eigenstate determines the modified eigenamplitude. An appropriately-normalized integral over the scattering angle yields the partial-wave-projected eigenamplitude $a_0$, the usual $s$-wave amplitude.
Including the two-loop corrections, we can write the renormalization-group-improved $s$-wave eigenamplitude in terms of the running coupling $\lambda(\mu)$. Not specifying a particular choice of $\mu$, we find:
The definition of the running coupling is given in Appendix B. The renormalization group is used to evolve the coupling from the Higgs mass, see Eq. (3), up to a scale $\mu$. The only explicit dependence of the amplitude on $m_H$ occurs in the overall factor associated with the anomalous dimension of the eigenstate. At one loop the anomalous dimension vanishes, and at two loops it is numerically small:
Since the anomalous dimension is numerically small for the values of the coupling we are concerned with, we may approximate this overall factor by unity throughout our analysis. The eigenamplitude then has no explicit dependence on $m_H$: it depends only on the running coupling $\lambda(\mu)$ and the energy $\sqrt{s}$. It is therefore an ideal observable from which to derive perturbative upper bounds on the running coupling in the high-energy limit.
II.1 The choice of the scale
The scale $\mu$ should be chosen such that the logarithms in the amplitude, Eq. (9), are small, in order to avoid large coefficients in the perturbative expansion. By inspection of Eq. (9), we see that $\mu$ should be of order $\sqrt{s}$. This choice corresponds to a summation of the leading logarithms into the running coupling. This observation has led to the scale $\mu = \sqrt{s}$ becoming the standard in calculations of Higgs- and Goldstone-boson scattering. Using this scale, one finds that the perturbative expansion becomes unreliable at a surprisingly low value of the coupling [8, 9, 10, 11], as discussed in the Introduction.
We argue that a more appropriate choice is $\mu = \sqrt{s}/e$. We begin by reviewing the calculation at one loop. Starting from Eq. (2), the contributions to the renormalized amplitude are the bubble diagram (Fig. 1, top), the wavefunction renormalization, and counterterms:
The one-loop renormalization-group logarithms arise solely from the bubble scattering diagrams in the $s$-, $t$-, and $u$-channels, with the internal lines of the bubble representing either Higgs- or Goldstone-boson propagators. In the high-energy limit, $s \gg m_H^2$, the mass of the Higgs boson can be neglected, so the evaluation of the bubble diagram involves only massless propagators. In dimensional regularization, one finds at one loop
where $q^2$ is the four-momentum squared flowing through the bubble, and the accompanying divergent quantity is singular in four dimensions ($\epsilon \to 0$). The Euler constant is denoted by $\gamma_E$.
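For orientation, the standard massless one-loop bubble in dimensional regularization takes the form (our normalization and sign conventions, which need not match the overall factors of Eq. (12)):

$$
B(q^2) \;=\; \frac{1}{16\pi^2}\left[\frac{1}{\epsilon} - \gamma_E + \ln 4\pi + 2 - \ln\frac{-q^2 - i\varepsilon}{\mu^2}\right] + \mathcal{O}(\epsilon)\,,
$$

which exhibits the universal constant 2 riding along with the logarithm.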
Evaluating the renormalized amplitude according to Eq. (11), all the divergences are cancelled by the counterterms, yielding a finite result. However, the constant 2 appearing in Eq. (12) is not cancelled in the renormalized amplitude. This is because the counterterms are calculated at “low” energies: at scales of order the Higgs mass in the case of Higgs counterterms, and at low momenta for Goldstone-boson quantities. Hence one cannot neglect the Higgs mass in the calculation of the counterterms, so the counterterms do not involve massless bubble diagrams (except for those which involve two massless Goldstone bosons). As a result, the constant which accompanies the divergence varies from diagram to diagram when calculating the low-energy counterterms (see Appendix C), unlike the universal constant 2 which appears in all high-energy scattering bubble diagrams.
Putting together the various contributions, the renormalized one-loop amplitude may be written
The coefficient of the logarithm is the one-loop beta-function coefficient, and we maintain the association of the constant 2 with the logarithm as suggested by Eq. (12). The other terms appearing in (14) have the following origin: the imaginary part is from the $s$-channel bubble diagrams, the angular dependence is from the $t$- and $u$-channel diagrams (with $t$ and $u$ functions of the center-of-mass scattering angle), and the remaining constant originates from counterterms and wavefunction renormalization. The crossed amplitude has the same logarithmic term, but its imaginary part as well as its angular dependence are different. This leads to the universal appearance of the constant 2 alongside the high-energy logarithm in all amplitudes listed in Eqs. (4)–(6).
The two-loop Feynman scattering diagrams which contribute to the amplitude are shown in Fig. 1 (bottom row), and their analytical results are given in Appendix D. There are two different topologies: a chain of two bubbles, and the “acorn” diagram, which consists of a bubble subdiagram inserted at a vertex of a bubble diagram. Each class of diagrams contributes to the leading logarithm at two loops. The chain of two bubbles clearly has a 2 associated with each logarithm, since it is the square of the one-loop bubble diagram. According to Eq. (1), roughly half of the two-loop leading logarithms come from the chain of two bubble diagrams. The other half of the two-loop leading logarithms comes from the acorn diagram, which is more subtle: its bubble subdiagram does have a 2, but the energy scale appearing in its logarithm is not $s$; rather, it is an integration variable which is integrated over when the subdiagram is inserted into the full two-loop diagram. The remaining second loop integration then becomes a modified bubble diagram, with a momentum-dependent vertex (due to the bubble subdiagram). This loop integration does not simply yield a constant 2.
The situation is similar at three loops and beyond. At $n$ loops there is always a topology which is a product of $n$ bubbles. This class of diagrams has the maximal number of 2’s connected with the leading logarithm. Next there are topologies which have $n-1$ bubbles, with the final integration being a modified bubble integral. Then there are topologies with $n-2$ bubbles, and so on. Starting at three loops, there also exist nonplanar graphs which cannot naturally be viewed as being constructed from bubble graphs. Yet their weight is expected to be small compared to the numerous bubble-related contributions to the complete set of $n$-loop diagrams.
The universality of the constant 2 suggests that the scale should be chosen to eliminate both the logarithm and the constant. Hence we advocate
in contrast to the usual choice $\mu = \sqrt{s}$. Our choice of scale amounts to summing the constant 2 along with the leading logarithm to all orders in perturbation theory. At two loops and beyond, the new scale also reduces the finite contributions coming from the higher-order terms in $\epsilon$ of the bubble diagram; see Appendix E. Since none of the other terms in Eq. (14) accompany the beta-function coefficient, it is inappropriate to choose the renormalization-group scale to sum any of them.
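To make the scale-fixing arithmetic explicit (a small worked step; the relative sign between the logarithm and the constant is the one implied by Eq. (12)): the one-loop combination to be eliminated vanishes when

$$
\ln\frac{s}{\mu^2} - 2 \;=\; 0 \quad\Longleftrightarrow\quad \mu^2 = s\,e^{-2} \quad\Longleftrightarrow\quad \mu = \frac{\sqrt{s}}{e} \approx \frac{\sqrt{s}}{2.72}\,,
$$

so the advocated scale is smaller than the conventional $\mu = \sqrt{s}$ by a factor of $e$.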
II.2 Testing the new scale
A concern is that the scale $\mu = \sqrt{s}/e$ may not be an appropriate choice for the next-to-leading logarithms, which first appear at two loops. A “bad” choice of scale could result in large two-loop coefficients. To investigate this aspect, we compare the perturbative expansions of the eigenamplitude at next-to-next-to-leading order, using the scales $\mu = \sqrt{s}$ and $\mu = \sqrt{s}/e$. Choosing the scale $\mu = \sqrt{s}$, one obtains from Eq. (9) (approximating the anomalous-dimension factor by unity)
where the three-loop running coupling is evaluated at $\mu = \sqrt{s}$. The new scale yields
where the three-loop running coupling is evaluated at $\mu = \sqrt{s}/e$. We find that the summation of the 2’s greatly reduces the size of the coefficients of the perturbative amplitude. Furthermore, the value of the running coupling at $\sqrt{s}/e$ is less than at $\sqrt{s}$, leading to a further improvement in the convergence of perturbation theory. The above results support the improved scale at the leading-log level and also suggest that it is the appropriate scale at the subleading level.
II.3 Upper perturbative bound on the running coupling
We now attempt to quantify the value of the running coupling at which perturbation theory becomes unreliable. There are three criteria we can use to judge the convergence of perturbation theory: (i) the size of the radiative corrections should be small; (ii) the scale dependence should decrease with increasing order in perturbation theory; (iii) the amplitude should not violate perturbative unitarity by a large amount.
We begin by investigating the size of the radiative corrections and the scale dependence of the amplitude. In Fig. 2 we show the real and imaginary parts of the eigenamplitude at leading order (LL), next-to-leading order (NLL), and next-to-next-to-leading order (NNLL), for various values of the running coupling, as a function of the renormalization scale $\mu$ (scaled by $\sqrt{s}$). Table 1 contains a translation of these values of the running coupling to the conventional quantity $\lambda(m_H)$, and to some corresponding pairs of Higgs mass and energy. A smaller Higgs mass requires a larger energy to obtain a given value of the running coupling, since the coupling must evolve over a larger energy range to achieve the same magnitude.
As is evident from Fig. 2, the size of the radiative corrections is greatly reduced for the scale $\mu = \sqrt{s}/e$ in comparison with the scale $\mu = \sqrt{s}$. Furthermore, the scale dependence is much less when the scale is varied about $\sqrt{s}/e$ rather than $\sqrt{s}$. These observations apply to both the real and imaginary parts of the amplitude. They support our finding that the appropriate scale for Higgs- and Goldstone-boson scattering, at energies large compared with the Higgs mass, is $\mu = \sqrt{s}/e$. Judging from the scale dependence, it seems that perturbation theory begins to break down once the running coupling becomes of order a few. Note, however, that even for such values the magnitude of the radiative corrections is not very large, so the size of the corrections does not appear to be a good indication of the reliability of perturbation theory.
A third method of judging the convergence of perturbation theory is to check the nonperturbative requirement that the eigenamplitude must lie in or on the unitarity circle. Plotting an Argand diagram, we show in Fig. 3 the one-loop and two-loop RG-improved $s$-wave eigenamplitude when taking $\mu = \sqrt{s}/e$ (see Eq. (17)), indicating various values of the coupling (long dashed curves). Also shown is the eigenamplitude when taking $\mu = \sqrt{s}$ (see Eq. (16)) [8, 9], again indicating various values of the coupling (short dashed curves). (The values in the $\mu = \sqrt{s}$ case are not identical to those in Refs. [8, 9], since a slightly different initial condition was used for the renormalization-group equation in [8, 9] than is used here; that initial condition leads to the amplitude straying slightly further from the unitarity circle for a given value of the coupling.) At leading order the two approaches coincide (dotted curve), since the choice of $\mu$ has no influence on the tree-level coefficient. The fact that, for the same value of the coupling, the amplitudes with the scale $\mu = \sqrt{s}/e$ lie closer to the unitarity circle is another way of demonstrating the improved convergence of perturbation theory with this scale. We may also use this plot to again estimate the value of the coupling at which perturbation theory becomes unreliable. The next-to-next-to-leading-order amplitude begins to stray uncomfortably far from the unitarity circle for sufficiently large coupling. This yields a perturbative upper bound on the running coupling which is in agreement with our findings above.
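For reference, the unitarity circle invoked here is the standard consequence of elastic unitarity for a partial-wave amplitude $a_0$:

$$
\mathrm{Im}\,a_0 = |a_0|^2 \;\Longleftrightarrow\; \left(\mathrm{Re}\,a_0\right)^2 + \left(\mathrm{Im}\,a_0 - \tfrac{1}{2}\right)^2 = \tfrac{1}{4}\,,
$$

a circle of radius $1/2$ centered at $i/2$; with inelastic channels open, the amplitude must lie inside it, $|a_0 - i/2| \le 1/2$.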
Previous analyses, using the scale $\mu = \sqrt{s}$, concluded that perturbation theory becomes unreliable already at a rather small coupling [8, 9, 10, 11]. Figs. 2 and 3 suggest that perturbation theory is in fact very convergent for this range of couplings. Choosing $\mu = \sqrt{s}/e$, we conclude that both scale dependence and unitarity indicate that perturbation theory becomes unreliable only at a substantially larger coupling, comparable to the simple tree-level unitarity bound based on [12, 13, 6, 17].
II.4 The absence of a strongly-interacting Higgs sector at high energies
We now ascertain the largest value of the running coupling attainable with the constraint $\sqrt{s} \le \Lambda$, in order to answer the question posed in the Introduction: can the running coupling be strong for energies between the Higgs mass and the cutoff? It is impossible to define the cutoff precisely, but Lüscher and Weisz have argued, by studying the cutoff effects on Goldstone-boson scattering above the Higgs mass, that the effective lattice cutoff lies roughly within a narrow range proportional to the inverse lattice spacing. (In Refs. [6, 7] a different symbol is used; we use $\Lambda$ to denote the cutoff, which is only proportional to the inverse lattice spacing $1/a$.) The cutoff effects on Goldstone-boson scattering at these energies are on the order of ten percent [4, 7]. The authors found that the relationship between the lattice spacing and the renormalized coupling is given approximately by the semi-perturbative two-loop formula
where $\beta_1$ and $\beta_2$ are the one- and two-loop beta-function coefficients, and the overall constant has been obtained nonperturbatively [6, 7]. (A slightly smaller value has also been found.) The consistent solution of the two-loop renormalization-group equation for the running coupling is
Combining these two equations, we obtain an implicit relation between the running coupling and the cutoff:
For Higgs- and Goldstone-boson scattering at $\sqrt{s} = \Lambda$, our scale corresponds to $\mu = \Lambda/e$; for the approximate upper bound on the cutoff, it corresponds to a somewhat larger scale. Solving for the coupling using Eq. (20), we find values which are comfortably below the value at which perturbation theory becomes unreliable.
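As an illustration of the inversion carried out here, the following sketch numerically solves a generic implicit two-loop relation of the type of Eq. (20). The beta-function coefficients b1 and b2 and the reference coupling are placeholders, not the values of Refs. [6, 7]:

```python
import numpy as np
from scipy.optimize import brentq

# Sketch: invert an implicit two-loop RG relation
#   ln(Lambda/mu) = F(lambda) - F(lambda_ref)
# for the coupling at the cutoff scale. All numbers below are placeholders.

b1, b2 = 1.0, 0.25   # placeholder beta coefficients (b1 + b2*l > 0 assumed)
lam_ref = 0.1        # placeholder boundary value at the reference scale

def F(l):
    # antiderivative of 1/beta(l) for beta(l) = b1*l^2 + b2*l^3
    return -1.0/(b1*l) - (b2/b1**2)*np.log(l/(b1 + b2*l))

def coupling_at(log_ratio):
    # solve F(lam) - F(lam_ref) = ln(Lambda/mu) for lam by bracketing
    return brentq(lambda l: F(l) - F(lam_ref) - log_ratio, 1e-6, 100.0)

for r in (1.0, 2.0, 3.0):
    print(f"ln(Lambda/mu) = {r:.1f}  ->  lambda = {coupling_at(r):.4f}")
```

The same inversion, with the actual coefficients and the nonperturbative constant of Refs. [6, 7], yields the coupling values quoted in the text.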
III Higgs decay
We now consider the decay amplitude of a heavy Higgs boson. This is another process which has been calculated at next-to-next-to-leading order [18, 19], and it can therefore also be used to explore the convergence of perturbation theory at large coupling. In contrast to Higgs- and Goldstone-boson scattering, however, the appropriate scale for the running coupling is the Higgs mass, $\mu = m_H$, not $\sqrt{s}$. Since the energy entering the decay amplitude is the Higgs mass, one cannot neglect the Higgs mass in the loop diagrams. Thus the logarithms, which come predominantly from loops containing Higgs bosons, are not accompanied by the universal constant 2 associated with massless bubble diagrams, in contrast to the case of high-energy scattering processes. Recall that this is the same reason the logarithms in the scattering counterterms are not accompanied by a universal constant.
Let us consider the maximum allowable Higgs mass, since this yields the maximum value of the coupling. Traditionally, this has been obtained from lattice calculations with the requirement that the cutoff be substantially greater than the Higgs mass. Recall that, for example, the upper bound on the coupling obtained by Lüscher and Weisz translates into an upper bound of 630 GeV on the Higgs mass. A subsequent analysis yields a similar bound of 680 GeV, using the same lattice action. We consider $m_H = 700$ GeV, which corresponds to a sizable coupling. Based on our experience with Higgs- and Goldstone-boson scattering, we expect this value to lie just within the perturbative regime.
We show in Fig. 4 the real and imaginary parts of the leading-order, next-to-leading-order, and next-to-next-to-leading-order decay amplitude for $m_H = 700$ and 900 GeV, as a function of the scale $\mu$. In the case of 700 GeV (top figures), the amplitude is rather insensitive to the scale in the vicinity of $\mu = m_H$, while it is rather sensitive to the scale above and below this region. This supports our statement that the appropriate scale for the Higgs decay amplitude is indeed the Higgs mass. The sensitivity of the amplitude to the scale decreases with increasing order in perturbation theory for $m_H = 700$ GeV, indicating that perturbation theory is reliable. Given the size of the coupling, the corrections to the decay amplitude are remarkably small, a feature we also observed in the case of Higgs- and Goldstone-boson scattering. The case of 900 GeV (bottom figures in Fig. 4) corresponds to a coupling which is quite large. The scale dependence of the amplitude is significantly increased when compared with the case of 700 GeV. The numerical studies of Refs. [18, 19], which investigate the scale dependence of the decay width rather than the amplitude, find the perturbative approach to be unreliable for Higgs masses of roughly 900 GeV and above. All these findings confirm that the maximal value of the Higgs mass found in lattice studies is within the perturbative range, supporting the original work of Lüscher and Weisz.
In Refs. [5, 20] it is speculated that perturbation theory seriously underestimates the Higgs width for a large Higgs mass. This is based on a calculation of the Higgs width in the $1/N$ expansion. It is difficult to reconcile this with the fact that perturbation theory is apparently reliable for such a Higgs mass, as seen in recent two-loop calculations [18, 19]. The discrepancy with the $1/N$ calculation disappears when the Goldstone bosons are given a significant mass. The Higgs width on the lattice (which is calculated with a significant Goldstone-boson mass) also suggests agreement with perturbation theory for a Higgs mass of roughly 700 GeV. An extrapolation of the lattice results to the limit of (nearly) massless Goldstone bosons, as in the perturbative calculations, would be of interest.
IV Phenomenological implications
If and when a Higgs boson is discovered, it will be interesting to measure vector-boson scattering at energies above the Higgs resonance. In the standard model, the Higgs boson is responsible for regulating the growth of the amplitude for longitudinal-vector-boson scattering with energy: at low energies, the scattering amplitude grows in proportion to $s/v^2$; above the resonance, it is proportional to $m_H^2/v^2$, i.e., to the Higgs coupling. Observing this behaviour experimentally will be challenging.
As an example, we look at the effect of our summation procedure on high-energy longitudinal-vector-boson scattering. This process is of interest for future colliders such as the LHC or linear $e^+e^-$ and $\mu^+\mu^-$ colliders. Its amplitude is immediately given by the generic high-energy amplitude; recall Eq. (4). Using the two-loop result, the NNLL cross section with its explicit scale dependence is
where the anomalous-dimension prefactor has been neglected, since it is close to unity for the values of the coupling and energy considered here. Thus the scaled cross section depends only on the three-loop running coupling and the ratio of the renormalization scale to the energy.
In Fig. 5 we show the cross section as a function of the scale, fixing the running coupling such that its value at the energy of interest is equal to 1.5. Using the scale $\mu = \sqrt{s}/e$, we find the size of the radiative corrections to be significantly reduced. In addition, the reduced scale dependence around $\sqrt{s}/e$ is clearly visible. The leading-log approximation with the conventional scale $\mu = \sqrt{s}$ overestimates the magnitude of the cross section by more than 30%, whereas the scale $\mu = \sqrt{s}/e$ yields a leading-log result only slightly less than the NLL and NNLL results. We conclude that phenomenological studies based on tree-level results are much more reliable when using $\mu = \sqrt{s}/e$.
Fixing the scale, we show in Fig. 6 the LL, NLL, and NNLL results for the cross section as a function of the running coupling, displaying the perturbative range. Using Table I, the value of the running coupling can be related to the desired Higgs mass and the center-of-mass energy of the incoming pair. Standard analyses of the cross section using $\mu = \sqrt{s}$ lead to large uncertainties for running couplings larger than about 2. The improved scale greatly reduces the one-loop and two-loop corrections, allowing for predictive cross sections even for couplings close to 4.
The high-energy amplitude given here is based entirely on the four-point interactions of the Higgs sector, a good approximation for $s \gg m_H^2$. For smaller values of $s$, the three-point interactions dominate over the four-point coupling. In addition, the electroweak gauge couplings contribute to the cross section, making a measurement of the Higgs coupling difficult.
In this paper we resolve the mystery, raised in Ref. [8] and deepened in Refs. [9, 10, 11], that the perturbative calculation of Higgs- and Goldstone-boson scattering, at energies large compared with the Higgs mass, is apparently unreliable for rather small values of the running coupling. The resolution lies in the choice of the scale $\mu$ in the running coupling $\lambda(\mu)$. All previous analyses have implicitly or explicitly used $\mu = \sqrt{s}$. We argue that a more appropriate scale is $\mu = \sqrt{s}/e$, and show that this scale leads to a dramatic improvement in the convergence of perturbation theory for Higgs- and Goldstone-boson scattering. We find that perturbation theory is apparently reliable up to a substantially larger coupling, consistent with the perturbative unitarity bound.
With the improved perturbation theory, we address the question of whether Higgs- and Goldstone-boson scattering can become strongly interacting at energies above the Higgs mass but below the cutoff, modeled by the inverse lattice spacing. We find that the value of the coupling for Higgs- and Goldstone-boson scattering at the cutoff is within the perturbative domain: a strongly-interacting standard Higgs model at high energies is excluded. This is a new result, and complements the result that the Higgs sector cannot be strongly interacting at energies near the Higgs mass [6, 7].
We also consider the decay amplitude of the Higgs boson to Goldstone bosons. In this case we argue that the appropriate scale is the Higgs mass, and we show that perturbation theory is apparently reliable up to a coupling corresponding to a Higgs mass of 700 GeV. This supports the conclusions of Refs. [6, 7]. It is difficult to reconcile this with the observation, made in Refs. [5, 20], that there is a discrepancy between the Higgs width calculated in the $1/N$ expansion and in perturbation theory for large Higgs masses. A lattice calculation of the Higgs width with an extrapolation to the case of (nearly) massless Goldstone bosons would be desirable.
The most important aspect of our work is the realization that the apparent breakdown of perturbation theory at weak coupling is simply due to a poor choice of scale in the running coupling. Our argument for the scale $\mu = \sqrt{s}/e$ is based on an analysis of the constant which accompanies the logarithm in the one-loop bubble diagram. It may be possible to refine this argument further. One might be able to develop a scale-fixing scheme analogous to the BLM method [22]: the number of Goldstone bosons, $N$, could play the role of $n_f$, the number of light fermions. (The terms proportional to $N$ connected to the counterterms should not be included in a BLM analysis.) Naively applying the BLM method at one loop leads to the same scale which we advocate. It is also interesting that our scale lies in the region where the amplitude is quite insensitive to the choice of scale. Therefore the principle of minimal sensitivity [23] is also expected to lead to a scale close to $\sqrt{s}/e$.
Acknowledgements. We are grateful for conversations and correspondence with M. Beneke, A. El-Khadra, U. Heller, P. Mackenzie, M. Neubert, U. Nierste, and P. Weisz. This work was performed in part at the Aspen Center for Physics. S. W. was supported in part by Department of Energy grant DE-FG02-91ER40677.
Appendix A Relation of the mass parameter to the physical Higgs-boson mass
The mass parameter $m_H$ is defined as the zero of the real part of the inverse Higgs-boson propagator. A physical definition of the Higgs mass is the real part of the pole (in the energy plane) of the Higgs propagator [6, 24]. This definition is process-independent and field-redefinition invariant. The relation between the two masses is
Appendix B The running coupling and the beta function to three loops
To obtain renormalization-group-improved scattering amplitudes, the evolution of the coupling as a function of the scale $\mu$ is needed. It is dictated by the renormalization-group equation,
Truncating the beta function at the appropriate order, these equations determine the $n$-loop running coupling. Explicitly, the one-loop running coupling is
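For reference, in a generic normalization where $\mu\, d\lambda/d\mu = \beta_1\lambda^2$, the standard one-loop solution reads

$$
\lambda(\mu) \;=\; \frac{\lambda(m_H)}{1 - \beta_1\,\lambda(m_H)\,\ln(\mu/m_H)}\,,
$$

which grows monotonically with $\mu$ and displays the Landau-pole behaviour characteristic of a trivial theory.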
At higher order the solution of Eq. (24) is no longer unique, and various solutions are discussed in the literature. We take the “consistent solution” introduced there. The one-, two-, and three-loop running couplings sum the leading, next-to-leading, and next-to-next-to-leading logarithms of the physical amplitudes, respectively.
Appendix C Nonzero Higgs mass effects in the bubble diagram
The Feynman amplitude for Higgs- and Goldstone-boson scattering receives contributions from the scalar bubble diagram,
where the arguments are the incoming four-momentum and the internal particle masses. In the massless case the logarithm is accompanied by the constant 2. In the high-energy limit, the bubble diagrams with internal Higgs propagators are also well approximated by the massless case. The counterterms appearing in Eq. (11), however, receive contributions from bubble diagrams evaluated at low energies: there the masses of the internal Higgs bosons cannot be neglected, and the corresponding finite pieces are different from 2. To illustrate this we list the different bubble contributions occurring at one loop:
Appendix D The two-loop scattering graphs
At two loops, the only relevant high-energy scattering topologies are the chain of two bubbles and the acorn diagram. Their exact results are given in the literature. Expanding in powers of $\epsilon$ and neglecting higher-order terms, they are evaluated as:
Appendix E Summing powers of the bubble diagram
The scale $\mu = \sqrt{s}/e$ is motivated by the summation of the contribution which comes from the constant 2 accompanying the leading logarithms at each loop order. Since the bubble diagram is ultraviolet divergent, its higher-order terms in $\epsilon$ also contribute at higher loop orders. Here we show that the improved scale also sums those contributions partially, at least as checked through the first several loop orders. The exact result for the bubble diagram to all orders in $\epsilon$ is
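In a standard normalization, this closed form is (overall factors may differ from the equation intended here):

$$
B(s) \;\propto\; \frac{1}{\epsilon\,(1-2\epsilon)}\;\frac{\Gamma(1+\epsilon)\,\Gamma^2(1-\epsilon)}{\Gamma(1-2\epsilon)}\left(\frac{\mu^2}{-s}\right)^{\epsilon}\,.
$$

The geometric series $1/(1-2\epsilon) = \sum_{n\ge 0}(2\epsilon)^n$ is the source of the powers of 2 summed by the scale choice, while the ratio of $\Gamma$ functions generates the terms involving $\gamma_E$ and the zeta values.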
where $\zeta_n$ denotes the Riemann zeta function. Expanding in powers of $\epsilon$ yields the numerical result
Each term of this expansion contributes to the finite part of the perturbative amplitude at a correspondingly higher loop order and beyond. Factoring out the constant 2 which is summed by the scale choice $\mu = \sqrt{s}/e$, we find that the coefficients of the power series in $\epsilon$ of the previous equation are reduced in magnitude:
It is also possible to completely cancel these coefficients to all orders by using the G scheme [25].
- S. Weinberg, Phys. Rev. Lett. 19, 1264 (1967); A. Salam, in Elementary Particle Theory: Relativistic Groups and Analyticity (Nobel Symposium No. 8), edited by N. Svartholm (Almqvist and Wiksell, Stockholm, 1968), p. 367.
- This is related to the “triviality” of scalar field theory. For a review see: The Standard Model Higgs Boson, ed. M. Einhorn, Current Physics Sources and Comments, Vol. 8 (North-Holland, Amsterdam, 1991).
- R. Dashen and H. Neuberger, Phys. Rev. Lett. 50, 1897 (1983).
- M. Göckeler, H. Kastrup, T. Neuhaus, and F. Zimmermann, Nucl. Phys. B404, 517 (1993).
- U. Heller, M. Klomfass, H. Neuberger, and P. Vranas, Nucl. Phys. B405, 555 (1993).
- M. Lüscher and P. Weisz, Phys. Lett. B212, 472 (1988).
- M. Lüscher and P. Weisz, Nucl. Phys. B318, 705 (1989).
- L. Durand, J. Johnson, and J. Lopez, Phys. Rev. Lett. 64, 1215 (1990); Phys. Rev. D 45, 3112 (1992).
- L. Durand, P. Maher, and K. Riesselmann, Phys. Rev. D 48, 1084 (1993).
- K. Riesselmann, Phys. Rev. D 53, 6226 (1996).
- U. Nierste and K. Riesselmann, Phys. Rev. D 53, 6638 (1996).
- D. Dicus and V. Mathur, Phys. Rev. D 7, 3111 (1973).
- B. Lee, C. Quigg, and H. Thacker, Phys. Rev. D 16, 1519 (1977).
- P. Maher, L. Durand, and K. Riesselmann, Phys. Rev. D 48, 1061 (1993); (E) 52, 553 (1995).
- W. Marciano and S. Willenbrock, Phys. Rev. D 37, 2509 (1988).
- A. Ghinculov, Phys. Lett. B337, 137 (1994); (E) 346, 426 (1995).
- W. Marciano, G. Valencia, and S. Willenbrock, Phys. Rev. D 40, 1725 (1989).
- A. Ghinculov, Nucl. Phys. B455, 21 (1995).
- A. Frink, B. Kniehl, D. Kreimer, and K. Riesselmann, TUM-HEP-247/96 (1996) and hep-ph/9606310; to appear in PRD.
- U. Heller, H. Neuberger, and P. Vranas, Nucl. Phys. B399, 271 (1993).
- M. Göckeler, H. Kastrup, J. Westphalen, and F. Zimmermann, Nucl. Phys. B425, 413 (1994).
- S. Brodsky, P. Lepage, and P. Mackenzie, Phys. Rev. D 28, 228 (1983).
- P. Stevenson, Phys. Rev. D 23, 2916 (1981); Nucl. Phys. B203, 472 (1982).
- G. Valencia and S. Willenbrock, Phys. Lett. B247, 341 (1990).
- K.G. Chetyrkin, A.L. Kataev, and F.V. Tkachov, Nucl. Phys. B174, 345 (1980). |
Analytic and Numerical Study of Preheating Dynamics
We analyze the phenomenon of preheating, i.e., explosive particle production due to parametric amplification of quantum fluctuations in the unbroken-symmetry case, or spinodal instabilities in the broken-symmetry phase, using the Minkowski-space O(N) vector model in the large-N limit to study the non-perturbative issues involved. We give analytic results for weak couplings and times short compared to the time at which the fluctuations become of the same order as the tree-level terms, as well as numerical results including the full backreaction. In the case where the symmetry is unbroken, the analytical results agree spectacularly well with the numerical ones in their common domain of validity. In the broken-symmetry case, interesting situations, corresponding to slow-roll initial conditions from the unstable minimum at the origin, give rise to a new and unexpected phenomenon: the dynamical relaxation of the vacuum energy. That is, particles are abundantly produced at the expense of the quantum vacuum energy while the zero mode comes back to almost its initial value. In both cases we obtain, analytically and numerically, the equation of state, which can be written in terms of an effective polytropic index that interpolates between vacuum and radiation-like domination.
We find that simplified analyses based on harmonic behavior of the zero mode, which give rise to a Mathieu equation for the non-zero modes, miss important physics. Furthermore, such analyses, which do not include the full backreaction and do not conserve energy, result in unbounded particle production. Our results rule out the possibility of symmetry restoration by non-equilibrium fluctuations in the cases relevant for new inflationary scenarios. Finally, estimates of the reheating temperature are given, as well as a discussion of the inconsistency of a kinetic approach to thermalization when a non-perturbatively large number of particles is created.
It has recently been realized [2, 3, 4] that as the zero-momentum mode of a quantum field evolves, it can drive a large amplification of quantum fluctuations. This, in turn, gives rise to copious particle production for bosonic fields, creating quanta in a highly non-equilibrium distribution and radically changing the standard picture of reheating the post-inflationary universe [5, 6, 7]. This process has other possible applications, such as understanding the hadronization stage of the quark-gluon plasma, as well as out-of-equilibrium particle production in strong electromagnetic fields and in heavy-ion collisions [9, 10, 11, 12].
The actual processes giving rise to preheating can be different depending on the potential for the scalar field involved as well as the initial conditions. For example, in new inflationary scenarios, where the inflaton field’s zero mode evolves down the flat portion of a potential admitting spontaneous symmetry breaking, particle production occurs due to the existence of unstable field modes which get amplified until the zero mode leaves the instability region. These are the instabilities that give rise to spinodal decomposition and phase separation. In contrast, if we start with chaotic initial conditions, so that the field has large initial amplitude, particles are created from the parametric amplification of the quantum fluctuations due to the oscillations of the zero mode.
In this paper we analyze the details of this so-called preheating process both analytically and numerically. Preheating is a non-perturbative process, with typically of order $1/\lambda$ particles being produced, where $\lambda$ is the self-coupling of the field. Because of this, any attempt at analyzing the detailed dynamics of preheating must also be non-perturbative in nature. This leads us to consider the O(N) vector model in the large-N limit. This is a non-perturbative approximation that has many important features that justify its use: unlike the Hartree or mean-field approximation, it can be systematically improved in the $1/N$ expansion. It conserves energy, satisfies the Ward identities of the underlying symmetry, and, again unlike the Hartree approximation, it predicts the correct order of the transition in equilibrium.
This approximation has also been used in other non-equilibrium contexts [9, 10, 11, 12]. In this work, we consider this model in Minkowski space, saving the discussion of the effects of the expansion of the universe for later work.
Our findings are summarized as follows.
We are able to provide consistent non-perturbative analytic estimates of the non-equilibrium processes occurring during the preheating stage, taking into account the exact evolution of the inflaton zero mode for large amplitudes when the quantum backreaction due to the produced particles is negligible, i.e., at early and intermediate times. We also compute the momentum-space distribution of the particle number as well as the effective equation of state during this stage. Explicit expressions for the growth of quantum fluctuations, the preheating time scale, and the effective (time-dependent) polytropic index defining the equation of state are given in sec. III.
We then go beyond the early/intermediate time regime and evolve the equations of motion numerically, taking into account backreaction effects (that is, the non-linear quantum field interaction). These results confirm the analytic results in their domain of validity and show how, when backreaction effects are large enough to compete with tree-level effects, dissipational effects arise in the zero mode. Energy conservation is guaranteed in the full backreaction problem, leading to the eventual shut-off of particle production. This is an important ingredient in the dynamics that determines the relevant time scales.
We also find a novel dynamical relaxation of the vacuum energy in this regime when the theory is in the broken phase. Namely, particles are produced at the expense of the quantum vacuum energy, while the zero mode contributes very little. We find a radiation-type equation of state at late times despite the lack of thermal equilibrium.
Finally we discuss the calculation of the reheating temperature in a class of models, paying particular attention to when the kinetic approach to thermalization and equilibration is applicable.
There have been a number of papers (see, e.g., refs. [2, 3, 13]) dedicated to the analysis of the preheating process, where particle production and backreaction are estimated in different approximations.
The layout of the paper is as follows. Section II presents the model, the evolution equations, and the renormalization of the equations of motion, and introduces the relevant definitions of particle number, energy, and pressure, together with the details of their renormalization. The unbroken and broken symmetry cases are presented in detail, and the differences in their treatment are clearly explained.
In sections III through V we present a detailed analytic and numerical treatment of both the unbroken and broken symmetry phases, emphasizing the description of particle production, energy, pressure, and the equation of state. In the broken-symmetry case, when the inflaton zero mode begins very close to the top of the potential, we find a novel phenomenon of relaxation of the vacuum energy that explicitly shows where the energy used to produce the particles comes from. We also discuss why the phenomenon of symmetry restoration at preheating, discussed by various authors [3, 14, 26, 27], is not seen to occur in the cases treated here, which are the ones relevant for new inflationary scenarios.
In section VI we provide estimates, under suitably specified assumptions, of the reheating temperature in this model as well as in other models in which the inflaton couples to lighter scalars. In this section we argue that thermalization cannot be studied with a kinetic approach because of the non-perturbatively large occupation number of long-wavelength modes.
Finally, we summarize our results and discuss future avenues of study in the conclusions. We also include an appendix where we gather many important technical details on the evaluation of the Floquet mode functions and Floquet indices used in the main text.
II Scalar Field Dynamics in the Large-N Limit
As mentioned above, preheating is a non-perturbative phenomenon, so a non-perturbative treatment of the field theory is necessary. This leads us to consider the O(N) vector model in the large-N limit.
In this section we introduce this model, obtain the non-equilibrium evolution equations, the energy momentum tensor and analyze the issue of renormalization. We will then be poised to study each particular case in detail in the later sections.
The Lagrangian density is the following:
with the coupling held fixed in the large-N limit. Here the field is an O(N) vector, and its components other than the sigma represent the “pions”. In what follows, we will consider two different cases of the potential, with (negative mass squared) or without (positive mass squared) symmetry breaking.
We can decompose the field into its zero mode and fluctuations about the zero mode:
The generating functional of real-time non-equilibrium Green’s functions can be written in terms of a path integral along a complex contour in time, corresponding to forward and backward time evolution, and, at finite temperature, a branch down the imaginary time axis. This requires doubling the number of fields, which now carry a label corresponding to forward (+) and backward (−) time evolution. The reader is referred to the literature for more details [15, 16]. This generating functional along the complex contour requires the Lagrangian density along the contour, which for zero temperature is given by
The tadpole condition will lead to the equations of motion, as discussed in the references.
A consistent and elegant version of the large-N limit for non-equilibrium problems can be obtained by introducing an auxiliary field, and is presented very thoroughly in the literature. This formulation has the advantage that it can incorporate the $1/N$ corrections in a systematic fashion. Alternatively, the large-N limit can be implemented via a Hartree-like factorization in which i) there are no cross correlations between the pions and sigma field and ii) the two-point correlation functions of the pion field are diagonal in the space of the remaining unbroken symmetry group. To leading order in large N both methods are completely equivalent, and for simplicity of presentation we choose the factorization method.
The factorization of the non-linear terms in the Lagrangian is (again for both components):
To obtain a large N limit, we define
where the large N limit is implemented by the requirement that
The leading contribution is obtained by neglecting the subleading terms in the formal large-N limit. The resulting Lagrangian density is quadratic, with a linear term in the fluctuation field:
Note that we have used spatial translational invariance to write
The necessary (zero-temperature) non-equilibrium Green’s functions are constructed from the following ingredients
while the Heisenberg field operator can be written as
where the expansion coefficients are canonical creation and annihilation operators, and the normalization involves the quantization volume.
The evolution equations for the expectation value and the mode functions can be obtained by using the tadpole method and are given by:
The initial state is chosen to be the vacuum for these modes. The initial frequencies will determine the initial state and will be discussed for each particular case below.
The sigma fluctuations obey an independent equation that does not enter the dynamics of the evolution of the expectation value or the pion fields to this order, and decouples at leading order in the large-N limit.
It is clear from the above equations that the Ward identities of Goldstone’s theorem are fulfilled: whenever the effective mass vanishes asymptotically for a nonzero expectation value, the “pions” are the Goldstone bosons. This observation will be important in the discussions of symmetry breaking in a later section.
Since in this approximation the dynamics of the sigma and pion fields decouple, and the sigma dynamics does not influence the expectation value or the pion mode functions, we will concentrate only on the solution for the pion fields. We note, however, that if the dynamics is such that the asymptotic masses of the sigma and of the “pion” multiplet are different, the original O(N) symmetry is broken down to the O(N−1) subgroup.
II.1 Renormalization of the Model
In this approximation, the Lagrangian is quadratic, and there are no counterterms. This implies that the equations for the mode functions must be finite. This requires that
The mode functions can be written as linear combinations of WKB solutions of the form
with the WKB frequency obeying a Riccati equation, and with the coefficients fixed by the initial conditions. After some algebra we find
At this point it is convenient to absorb a further finite renormalization in the definition of the mass and introduce the following quantities:
For simplicity in our numerical calculations later, we choose a fixed renormalization scale. The evolution equations are now written in terms of these dimensionless variables, in which dots stand for derivatives with respect to the dimensionless time.
II.2 Unbroken Symmetry
In this case the mass squared is positive, and in terms of the dimensionless variables introduced above we find the following equations of motion:
As mentioned above, the choice of initial frequencies determines the initial state. We choose them such that the quantum fluctuations are in the ground state of the oscillators at the initial time. We thus choose the dimensionless frequencies to be
The Wronskian of two solutions of (39) is given by
while the fluctuation (backreaction) term is given by
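To make the structure of the coupled system concrete, here is a toy numerical sketch of the zero-mode/mode-function evolution with backreaction for the unbroken case. The equations of motion assumed below, $\ddot\eta + \eta + \eta^3 + g\Sigma\,\eta = 0$ and $\ddot f_q + [q^2 + 1 + \eta^2 + g\Sigma]\,f_q = 0$, follow the dimensionless form used in this section, but the coupling, the hard momentum cutoff, and the crude vacuum subtraction inside $\Sigma$ are illustrative choices rather than the renormalized scheme of the text:

```python
import numpy as np

# Toy backreaction evolution (unbroken symmetry), dimensionless variables.
g, eta0 = 1e-2, 4.0                 # illustrative coupling and initial amplitude
qmax, nq = 20.0, 512                # hard cutoff and momentum grid (numerical choices)
q = np.linspace(qmax/nq, qmax, nq)
dq = q[1] - q[0]

w0 = np.sqrt(q**2 + 1.0 + eta0**2)  # initial mode frequencies
f = (1.0/np.sqrt(2.0*w0)).astype(complex)   # f_q(0), Wronskian-normalized
fd = -1j*w0*f                       # f_q'(0)
eta, etad = eta0, 0.0

def Sigma(fmodes):
    # crude fluctuation integral with a simple vacuum subtraction
    return np.sum(q**2*(np.abs(fmodes)**2 - 0.5/np.sqrt(q**2 + 1.0)))*dq

dt, tmax = 2e-4, 40.0
for _ in range(int(tmax/dt)):       # semi-implicit Euler (stable for oscillators)
    S = g*Sigma(f)
    etad += dt*(-(1.0 + S)*eta - eta**3)
    fd   += dt*(-(q**2 + 1.0 + eta**2 + S)*f)
    eta  += dt*etad
    f    += dt*fd

print(f"eta(tau={tmax}) = {eta:.4f},  g*Sigma = {g*Sigma(f):.4f}")
```

Despite the simplifications, such a sketch already displays the qualitative features discussed below: exponential growth of the fluctuations, the eventual competition of the backreaction with the tree-level terms, and the damping of the zero mode.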
II.3 Broken Symmetry
In the case of broken symmetry the mass squared is negative, and the field equations in the large-N limit become:
where the backreaction term is given in terms of the mode functions by the same expression as in the previous case, Eq. (44). Now the choice of boundary conditions is more subtle. The situation of interest is when the zero mode starts very close to the origin, corresponding to the situation where the expectation value rolls down the potential hill from the origin. The low-momentum modes are unstable and thus do not represent simple harmonic-oscillator quantum states. Therefore one must choose a different set of boundary conditions for these modes. Our choice will be that corresponding to the ground state of an upright harmonic oscillator. This particular initial condition corresponds to a quench-type situation in which the initial state is evolved in time in an inverted parabolic potential at early times. Thus we shall use the following initial conditions for the mode functions:
along with the initial conditions for the zero mode given by eq.(41).
II.4 Particle Number
Although the notion of particle number is ambiguous in a time dependent non-equilibrium situation, a suitable definition can be given with respect to some particular pointer state. We consider two particular definitions that are physically motivated and relevant as we will see later. The first is when we define particles with respect to the initial Fock vacuum state, while the second corresponds to defining particles with respect to the adiabatic vacuum state.
In the former case we write the spatial Fourier transform of the fluctuating field in (9) and its canonical momentum as
with time-independent creation and annihilation operators, the annihilation operators being those that annihilate the initial Fock vacuum state. Using the initial conditions on the mode functions, the Heisenberg field operators are written as
with the time-evolution operator obeying the usual initial condition. The Heisenberg operators are related to the initial-time operators by a Bogoliubov (canonical) transformation (see the references for details).
The particle number defined with respect to the initial Fock vacuum state is given, in terms of the dimensionless variables introduced above, as
It is this definition of particle number that will be used for the numerical study.
with the upper and lower signs for the unbroken and broken symmetry cases, respectively. When the frequencies are real, the adiabatic modes can be introduced in the following manner:
where now the annihilation operator destroys the adiabatic vacuum state and is related to the original operators by a Bogoliubov transformation. This expansion diagonalizes the instantaneous Hamiltonian in terms of the adiabatic canonical operators. The adiabatic particle number is
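In terms of the mode functions, this quantity takes the standard form (assuming the Wronskian normalization $f_k\dot f_k^* - f_k^*\dot f_k = i$; conventions differing by factors of 2 exist):

$$
N_k^{\mathrm{ad}}(\tau) \;=\; \frac{|\dot f_k(\tau)|^2 + \omega_k^2(\tau)\,|f_k(\tau)|^2}{2\,\omega_k(\tau)} \;-\; \frac{1}{2}\,, \qquad \omega_k^2(\tau) > 0\,,
$$

i.e., the instantaneous mode energy measured in units of the instantaneous frequency, minus the zero-point piece.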
As mentioned above, the adiabatic particle number can only be defined when the frequencies are real. Thus, in the broken-symmetry state it can only be defined for wave-vectors larger than the maximum unstable wave-vector. These adiabatic modes and the corresponding adiabatic particle number have been used previously within the non-equilibrium context [9, 10, 11] and will be very useful in the analysis of the energy below. Both definitions coincide at the initial time, and both vanish there because we are choosing zero initial temperature. (We considered a non-zero initial temperature in refs. [4, 12].)
II.5 Energy and Pressure
The energy momentum tensor for this theory is given by
Taking the expectation value in the initial state and the infinite-volume limit, defining the energy density, and recalling that the tadpole condition requires that the expectation value of the fluctuation vanishes, we find the expectation value of the energy to be
It is now straightforward to prove that this bare energy is conserved using the equations of motion (19)–(21). It is important to account for the last term when taking the time derivative, because this term cancels a similar term in the time derivative of the fluctuation integral.
Since we consider translationally as well as rotationally invariant states, the expectation value of the energy-momentum tensor takes the fluid form
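That is, for spatially homogeneous and isotropic states,

$$
\langle T^{\mu}{}_{\nu} \rangle \;=\; \mathrm{diag}\left(\varepsilon,\,-p,\,-p,\,-p\right)\,,
$$

with $\varepsilon$ the energy density and $p$ the pressure; the effective polytropic index discussed in the text is read off from the ratio $p/\varepsilon$.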
We want to emphasize that the full evolution of the zero mode plus the back-reaction with quantum fluctuations conserves energy. Such is obviously not the case in most treatments of reheating in the literature in which back-reaction effects on the zero mode are neglected. Without energy conservation, the quantum fluctuations grow without bound. In cosmological scenarios energy is not conserved but its time dependence is not arbitrary; in a fixed space-time background metric it is determined by the covariant conservation of the energy momentum tensor. There again only a full account of the quantum back-reaction will maintain covariant conservation of the energy momentum tensor.
We can write the integral in eq.(63) as
where a spatial upper momentum cutoff has been introduced, to be taken to infinity after renormalization. In the broken-symmetry case there is a separate contribution to the energy-momentum tensor from the unstable modes with negative squared frequencies, while the stable modes contribute through the adiabatic particle number given by eq. (59). In the unbroken-symmetry case the unstable-mode contribution is absent.
This representation is particularly useful in dealing with the renormalization of the energy. Since the energy is conserved, a subtraction at the initial time suffices to render it finite in terms of the renormalized coupling and mass. Using energy conservation and the renormalization conditions in the large-N limit, we find that the particle contribution is finite. This also follows from the asymptotic behaviors (28).
In terms of dimensionless quantities, the renormalized energy density is, after taking the cutoff to infinity:
where the lower sign applies to the broken-symmetry case and the upper sign to the unbroken-symmetry case. The subtraction constant is chosen such that the initial energy coincides with the classical energy of the zero mode. The remaining quantity is identified as the effective (dimensionless) mass of the “pions”.
The pressure is obtained from the spatial components of the energy momentum tensor (see eq.(64)) and we find the expectation value of the pressure density to be given by
Using the large-momentum behavior of the mode functions (28), we find that, aside from the time-independent divergence that is also present in the energy, the pressure needs an extra subtraction compared with the energy. Such a term corresponds to an additive renormalization of the energy-momentum tensor of the form
with a (divergent) constant. Performing the integrals with a spatial ultraviolet cutoff, and in terms of the renormalization scale introduced before, we find
In terms of dimensionless quantities, and after subtracting a time-independent quartic divergence, we finally find, with the renormalization scale set as before,
At this stage we can recognize why the effective potential is an irrelevant quantity for studying the dynamics.
The sum of the terms in (67) that do not involve the mode functions is identified with the effective potential in this approximation, for a time-independent zero mode. These terms arise from the “zero-point” energy of the oscillators with time-dependent frequency in (65).
In the broken-symmetry case the fluctuation term describes the dynamics of the spinodal instabilities, since the mode functions grow in time. Ignoring these instabilities and keeping the zero mode fixed, as is done in a calculation of the effective potential, results in an imaginary part. In the unbroken-symmetry case the sum of terms without mode functions gives the effective potential in the large-N limit, but the fluctuation term describes the profuse particle production via parametric amplification: the mode functions in the unstable bands give a contribution to this term that eventually becomes non-perturbatively large and comparable to the tree-level terms, as will be described in detail below. Clearly, in both the broken and unbroken symmetry cases the effective potential misses all of the interesting non-perturbative dynamics, that is, the exponential growth of quantum fluctuations and the ensuing particle production, associated either with unstable bands in the unbroken-symmetry case or with spinodal instabilities in the broken-symmetry phase.
The expression for the renormalized energy density given by (67)–(69) differs from the effective potential in several fundamental aspects: i) it is always real, as opposed to the effective potential, which becomes complex in the spinodal region; ii) it accounts for particle production and time-dependent phenomena.
The effective potential is a useless tool to study the dynamics precisely because it misses the profuse particle production associated with these dynamical, non-equilibrium and non-perturbative processes.
III The Unbroken Symmetry Case
III.1 Analytic Results
In this section we turn to the analytic treatment of equations (38), (39), and (44) in the unbroken-symmetry case. Our approximations will only be valid in the weak-coupling regime and for times small enough that the quantum fluctuations are not large compared to the “tree-level” quantities. We will see that this encompasses the times in which most of the interesting physics occurs.
Since the backreaction term is proportional to the coupling, it is expected to be small for weak coupling during some interval of time. This time scale, to be determined below, sets the relevant time scale for preheating and will be called the preheating time.
During the interval of time in which the back-reaction term can be neglected, we can solve eq.(38) in terms of elliptic functions, with the result:
where cn stands for the Jacobi cosine. Notice that the solution is periodic, with a period proportional to the complete elliptic integral of the first kind. In addition we note that, since only the square of the zero mode enters the mode equations,
if we neglect the back-reaction in the mode equations, the ‘potential’ appearing in them is periodic with half the period of the zero mode. Inserting this form for the zero mode in eq. (39) and neglecting the backreaction yields
This is the Lamé equation for a particular value of the coefficients that makes it solvable in terms of Jacobi functions. We summarize here the results for the mode functions; the derivations are given in the Appendix.
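As a consistency check, the quoted cn solution can be verified numerically. Assuming the backreaction-free zero-mode equation takes the dimensionless form $\ddot\eta + \eta + \eta^3 = 0$ (the form consistent with a Jacobi-cosine solution), the solution with $\eta(0)=\eta_0$, $\dot\eta(0)=0$ is $\eta(\tau) = \eta_0\,\mathrm{cn}(\Omega\tau, k)$ with $\Omega^2 = 1+\eta_0^2$ and $k^2 = \eta_0^2/[2(1+\eta_0^2)]$:

```python
import numpy as np
from scipy.special import ellipj
from scipy.integrate import solve_ivp

# Check the elliptic-function solution of eta'' + eta + eta^3 = 0
eta0 = 4.0
Om = np.sqrt(1.0 + eta0**2)
m = eta0**2/(2.0*(1.0 + eta0**2))       # scipy uses the parameter m = k^2

tau = np.linspace(0.0, 20.0, 2001)
sn, cn, dn, ph = ellipj(Om*tau, m)
eta_cn = eta0*cn                        # closed-form solution

sol = solve_ivp(lambda t, y: [y[1], -y[0] - y[0]**3],
                [tau[0], tau[-1]], [eta0, 0.0],
                t_eval=tau, rtol=1e-10, atol=1e-12)
print("max deviation:", np.max(np.abs(sol.y[0] - eta_cn)))
```

The agreement at the level of the integrator tolerance confirms the elliptic-function form and fixes the period as $4K(k)/\Omega$, with $K$ the complete elliptic integral of the first kind.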
Since the coefficients of eq. (80) are periodic, the mode functions can be chosen to be quasi-periodic (Floquet type), with quasi-period equal to the period of the potential.
where the Floquet indices are independent of time. In the allowed zones the Floquet index is a real number and the functions are bounded, with a constant maximum amplitude. In the forbidden zones the index has a non-zero imaginary part, and the amplitude of the solutions either grows or decreases exponentially.
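The band structure can be exhibited numerically without solving the Lamé equation analytically: for any Hill-type equation $\ddot f + W(\tau)\,f = 0$ with periodic $W$, the Floquet multipliers are the eigenvalues of the monodromy matrix over one period, and a forbidden (growing) band is signalled by $|\mathrm{tr}\,M| > 2$. The sketch below assumes the mode-equation potential $W = q^2 + 1 + \eta^2(\tau)$ used in this section:

```python
import numpy as np
from scipy.special import ellipj, ellipk
from scipy.integrate import solve_ivp

# Floquet analysis of  f'' + [q^2 + 1 + eta^2(tau)] f = 0,
# with eta = eta0*cn(Om*tau, k); eta^2 has period 2K(m)/Om.
eta0 = 4.0
Om = np.sqrt(1.0 + eta0**2)
m = eta0**2/(2.0*(1.0 + eta0**2))
T = 2.0*ellipk(m)/Om

def rhs(tau, y, q):
    sn, cn, dn, ph = ellipj(Om*tau, m)
    return [y[1], -(q**2 + 1.0 + (eta0*cn)**2)*y[0]]

def tr_monodromy(q):
    # columns of M: solutions with (f, f')(0) = (1,0) and (0,1), evolved one period
    M = np.empty((2, 2))
    for j, y0 in enumerate(([1.0, 0.0], [0.0, 1.0])):
        sol = solve_ivp(rhs, [0.0, T], y0, args=(q,), rtol=1e-10, atol=1e-12)
        M[:, j] = sol.y[:, -1]
    return np.trace(M)

for q in np.linspace(0.0, 5.0, 11):
    t = tr_monodromy(q)
    print(f"q = {q:4.1f}  tr M = {t:+9.3f}  {'FORBIDDEN' if abs(t) > 2 else 'allowed'}")
```

Scanning the momentum then maps out the allowed ($|\mathrm{tr}\,M| \le 2$) and forbidden ($|\mathrm{tr}\,M| > 2$) bands referred to above.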
Obviously, the Floquet modes cannot in general obey the initial conditions given by (20), and the proper mode functions with these initial conditions will be obtained as linear combinations of the Floquet solutions. We normalize the Floquet solutions as
We can now express the modes with the proper boundary conditions (see eq. (20)) as the following linear combinations of the two Floquet solutions
where the normalization factor is the Wronskian of the two Floquet solutions
The allowed and forbidden bands correspond to definite intervals of the mode momentum. The last forbidden band occurs at positive momentum squared and hence will contribute to the exponential growth of the fluctuations.
The mode functions can be written explicitly in terms of Jacobi theta-functions for each band. We find, for the forbidden band,
where the argument appearing here is a function of the momentum in the forbidden band, defined by
and the Jacobi zeta function appears; it can be expanded in series as follows
Mathematical Institute, University of Oxford,
24-29 St. Giles’, Oxford OX1 3LB, England
School of Natural Sciences, Institute for Advanced Study,
Einstein Drive, Princeton, NJ 08540, USA
Theory Group, Department of Physics, University of Texas at Austin,
Austin, TX, 78712, USA
Center for Geometry and Theoretical Physics, Duke University,
Durham, NC 27708, USA
We study new nonperturbative phenomena in heterotic string vacua corresponding to pointlike bundle singularities in codimension three. These degenerations result in new four-dimensional infrared physics characterized by light solitonic states whose origin is explained in the dual F-theory model. We also show that such phenomena appear generically in Higgsing and describe in detail the corresponding bundle transition.
1. Introduction and Overview
A remarkable achievement of string theory in recent years consists of understanding various nonperturbative effects associated to the breakdown of worldsheet conformal field theory. An example which has received much attention in the literature is the small instanton singularity in heterotic string theories. These singularities occur in the context of heterotic string theories compactified on a K3 surface and are associated to the simplest pointlike degenerations of the background gauge bundles. Such degenerations have been shown to result in nonperturbative effects in six dimensions which can be understood either in terms of D-brane physics [1–5], or more generally, from the point of view of F-theory [6–9]. Various nonperturbative aspects of four-dimensional heterotic strings have also been studied in detail [10–19]. The common feature of all these effects is that they can ultimately be related to the six-dimensional small instanton singularity by an adiabatic argument. They have accordingly been interpreted in terms of heterotic fivebranes wrapping holomorphic curves in the Calabi–Yau threefold. From a mathematical point of view, such CFT singularities correspond to codimension-two bundle degenerations, precisely as in the six-dimensional situation.
In this paper we consider a new class of nonperturbative effects specific to heterotic compactifications on Calabi–Yau threefolds. The singularities treated in the present work are qualitatively new, being associated to codimension-three bundle degenerations. This is a novel class of degenerations which have not been studied in physics so far and which are specific to four dimensional compactifications. As such, we expect qualitatively new infrared effects in four dimensional theories which will be discussed below.
The main tool for analyzing these singularities is heterotic/F-theory duality which encodes the bundle data in the geometry of a singular Calabi–Yau fourfold. This gives a pure geometric interpretation of the perturbative heterotic spectrum and determines at the same time the nonperturbative massless spectrum associated to CFT singularities. Generically, pointlike bundle singularities are expected to result in a certain type of spacetime defect where the nonperturbative degrees of freedom are localized. While this is also the case here, the nature of the resulting defect is very hard to understand. This is caused by our poor understanding of codimension-three degenerations of solutions to the Donaldson–Uhlenbeck–Yau equation. In particular, no explicit throat-like supergravity solution is known in this case.
In order to gain some insight into the nature of these singularities, it may be helpful to highlight the most important physical aspects by comparison with the small instanton transition. The bundle acquires pointlike degenerations which can be regarded as three dimensional defects filling space-time. However, there are no such stable excitations in the bulk M-theory, therefore such a defect is effectively stuck to the nine dimensional wall. This fact makes its physical properties quite obscure since it is not clear how to identify the light states governing the dynamics.
The F-theory picture is however more explicit. As expected, the bundle singularities correspond to special points on the F-theory base where the elliptic fibration develops certain non-generic singularities. These are superficially similar to the singularities occurring in the F-theory presentation of small instantons. So one might think by analogy that each such defect would correspond to a blowup of the three complex dimensional base. In fact, this is not the case since it will be shown in section two that the smooth fourfold obtained by blowing up the base is not Calabi–Yau. This is in good agreement with the absence of a ‘Coulomb branch’ noted previously (since the size of the exceptional divisor would be related to a displacement of the defect in the M theory bulk, which is forbidden).
Quite remarkably, it turns out that in the present case, there exist Calabi–Yau resolutions involving only fiber blowups. Recall that the resolution of the typical ADE singular fibers occurring in F-theory consists of a chain of two-spheres with specific intersection numbers in agreement with the corresponding Dynkin diagram. On top of each point in the base we have generically such a collection of spheres. A careful analysis reveals the fact that above the special singular points the resolved fiber contains an entire complex surface, i.e., a manifold of dimension four rather than a collection of two-spheres. Even more surprising is the fact that the occurrence of this surface is basically automatic; no extra blowups are necessary and there are no extra generators of the Kähler cone.
This has interesting consequences for physics, which are easier to understand by compactifying the four dimensional F-theory model down to three dimensions on a circle of radius . According to standard duality, this is equivalent to M theory on the resolved fourfold, the size of the elliptic fiber being proportional to . The presence of the surface in the fiber results in new light degrees of freedom in the low energy spectrum. We can have a string corresponding to the M fivebrane wrapped on the surface and a tower of particle states arising by wrapping membranes on holomorphic curves in . We regard the nonperturbative massless excitations as a sign of a singularity in the heterotic CFT. However, at the present stage it is very hard to get more insight into the low energy dynamics.
After this outline of the physics, let us describe next the precise context in which such singularities may be encountered. It is a common fact in string theory that singularities of various sorts are associated to phase transition between string vacua. As discussed in more detail later, it turns out that the pointlike singularities considered here appear generically in the context of the Higgs phenomenon in F-theory. More explicitly, we consider a typical transition corresponding to a family of singular Calabi–Yau fourfolds with generic fiber singularity which is enhanced to along a subspace of the moduli space. Technically, such an enhancement is realized by setting to zero certain parameters of the Weierstrass model. When this apparently simple transition is studied in detail one notices the presence of extra codimension-three singularities and the nonperturbative phenomena described above.
We can get a new perspective on this transition by making use of the spectral cover construction of Friedman, Morgan, and Witten. At generic points in the moduli space we have a smooth holomorphic bundle of rank three. At the transition point, this bundle degenerates in a controlled manner to a singular object which is technically a coherent sheaf. Coherent sheaves have made their appearance in a number of places in physics. For example, in the linear sigma model approach to (0,2) models, the monad construction of the gauge bundle often results in a reflexive coherent sheaf rather than a bundle. However, at least in the examples studied in , reflexive sheaves define non-singular CFT’s without the exotic phenomena described above. Given the fact that singularities in string theory tend to have a universal local behavior, it is reasonable to assume that this is the generic behavior.
In fact, this is consistent with the spectral cover description of the transition. We will show in section three that, at the transition point, the bundle degenerates to a non-reflexive rank three sheaf. Moreover, this sheaf admits a natural local decomposition as a sum of a rank two reflexive sheaf and the ideal sheaf of a point. After the transition, the heterotic vacuum will be described accordingly as having two distinct sectors. We have a perturbative CFT part corresponding to the reflexive rank two sheaf, which gives an gauge group and a certain number of matter multiplets. The second sector consists of nonperturbative degrees of freedom localized at certain points in the Calabi–Yau threefold, and corresponds to the ideal sheaves. This is quite similar to the small instanton effects in six dimensions, one of the main differences being the absence of a Coulomb branch.
This concludes our brief overview of pointlike bundle singularities in string theories. More details and explicit constructions are presented in the next sections. We discuss the transition in F-theory, and explain the occurrence of the surfaces in section two. The heterotic picture, based on the spectral cover approach, is presented in section three. Some technical details are postponed to an appendix.
Our starting point for the F-theory description is a Calabi–Yau fourfold which is dual to the heterotic string on a Calabi–Yau threefold with a certain gauge bundle. The unbroken gauge group then appears as a singularity in the elliptic fiber of the Calabi–Yau fourfold.
We will choose the heterotic Calabi–Yau threefold to be elliptically fibered over a base , which we choose to be the ruled surface ; this threefold has Hodge numbers . The dual Calabi–Yau fourfold is then elliptically fibered over a threefold base which can be viewed as the total space of the projective bundle , where , and is some effective divisor in the base , related to the class describing the heterotic bundle by the relation [23,13,14]. Similar models have been considered in a different context in .
The fourfold is described in the vicinity of a section by the Weierstrass equation
within the bundle . We have and , the normal bundle of in , and we denote by one of its sections, i.e., . The geometry of the split singularity over the section (which corresponds to gauge group) is then encoded in the following expressions for , and the discriminant :
where are sections of certain line bundles over : , , , and .
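Since the displayed equations above were lost in extraction, the following is only a hedged sketch of the standard Weierstrass data they presumably contained (the notation $f$, $g$, $\Delta$ matches the surrounding text; the identification of the gauge group as $E_6$ is an inference from the rank-three spectral data of section three, not a quotation from the paper):
$$ y^2 = x^3 + f\,x + g, \qquad \Delta = 4 f^3 + 27 g^2 , $$
with a split $E_6$ (Kodaira type $IV^*$) fiber over the section corresponding to vanishing orders $(\operatorname{ord} f,\ \operatorname{ord} g,\ \operatorname{ord} \Delta) = (\geq 3,\ 4,\ 8)$.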
The fourfold described by (2.1) can be resolved to a nonsingular Calabi–Yau fourfold, in which the singularities of split type are replaced by rational curves whose intersection matrix reproduces the Dynkin diagram of (generically). No surprises are encountered during this resolution. One way to describe the resulting F-theory model is as a limit from three dimensions: first, compactify M-theory on the nonsingular Calabi–Yau fourfold, then consider the limit in which all fiber components introduced during the resolution of the Weierstrass model acquire zero area (leading to enhanced gauge symmetry), and finally, take the F-theory limit by sending the area of the elliptic fibers to zero, opening up a new effective dimension.
The transition (un-Higgsing) to unbroken gauge group is described in F-theory terms by the condition , which results in the singularity being enhanced to fibers. Let us describe in some detail the geometry of this singular fourfold.
Under the condition , the discriminant actually has a factor of :
leading to fibers. The component of the discriminant is defined by the equation ; this latter equation is a cubic equation in whose discriminant with respect to is given by:
Let denote the locus , therefore the matter curve is . Then, it is easy to see that the locus is precisely . Finally, by and we denote the loci and respectively. The singularity type is enhanced to over the matter curve , which is the intersection locus of with the nodal part of the discriminant, . More interestingly, the vanishing orders jump to (4, 6, 12) over the intersection locus . (In the familiar case of an elliptic surface, this would be the signal that the Weierstrass model was not minimal. However, in the present context there is no birational change which can be made which would reduce those orders of vanishing.) The set where the vanishing orders jump to (4, 6, 12) is precisely the singularity set of the corresponding heterotic sheaf (which would otherwise be a bundle, were it not for the presence of this locus). There is a cusp curve inside , which projects onto the curve in . The geometry of the singular fourfold is sketched in Figure 1.
As explained in the introduction, resolution of this locus results in the appearance of entire complex surfaces over the locus in . This phenomenon is most efficiently observed by performing a weighted blowup of the Weierstrass model, which we now proceed to describe.
2.2. The Weighted Blowup
The weighted blowup is performed by introducing an additional variable , and assigning weights as follows:
We now rewrite (2.1) as a homogeneous degree 5 equation in the variables as follows
This is the weighted blowup of (2.1). In the patch , it is equivalent to (2.1), but when , we get
Note that over the point , this vanishes identically. Thus parametrise a family of hypersurfaces in except at , where we obtain all of . This is precisely the complex surface mentioned before. Its occurrence is a direct consequence of the vanishing orders of the discriminant along .
The weighted blowup is really only the first stage of a complete toric resolution of the singularity, and thus does not introduce any extra Kähler classes beyond those already needed for the usual resolution of singularities. We present some further evidence for this lack of additional Kähler classes by studying an explicit toric example in the next subsection. (It is also possible to perform an explicit local resolution using the technique of , which would be related by generalized flops to the resolution presented in this section.)
2.3. Toric Example
We now construct an explicit toric model of a Calabi–Yau fourfold elliptically fibered over base (i.e., , where and are the classes in ), with a section of split singularities. This fourfold is dual to heterotic strings compactified on the Calabi–Yau threefold which is elliptically fibered over , with an bundle with , and .
From index theorems and anomaly cancellation (see, for instance [13,14]), we expect the following Hodge numbers, .
The Calabi–Yau fourfold may be constructed as a hypersurface in a toric variety following the prescription of [13,14]. The dual polyhedron , which encodes the divisors of the polyhedron has vertices:
Standard toric methods [13,14] give, for the Hodge numbers of the fourfold, , in agreement with expectations. Note that the Euler characteristic is not divisible by 24, indicating that the model has a background G-flux turned on. Moreover, it is possible to find a triangulation of the polyhedron consistent with its elliptic fibration structure, such that each of the top dimensional cones has unit volume, guaranteeing smoothness of the corresponding Calabi–Yau fourfold. We assert that this polyhedron gives the F-theory dual of the heterotic vacuum described above.
We can now study the effect of un-Higgsing the unbroken gauge group to . The heterotic bundle is now , with , and unchanged. Index theorems and anomaly cancellation predict, for the dual Calabi–Yau fourfold, Hodge numbers .
The dual polyhedron describing the Calabi–Yau fourfold has vertices
The fourfold has the following Hodge numbers, in accordance with our expectations: . Once again, it is possible to find a triangulation of the polyhedron consistent with its elliptic fibration structure, such that each of the top dimensional cones has unit volume, guaranteeing smoothness of the corresponding Calabi–Yau fourfold.
It should be emphasized here that no extra Kähler classes other than the ones corresponding to the resolution of the locus are present in the fourfold. Since we expect, on general grounds, that the resolution of the singularity yields an entire complex surface over specific points in , we conclude that the appearance of the complex surface in the resolution of the locus does not introduce any extra Kähler classes.
The corresponding F-theory model has some features whose physical effects are difficult to explain in detail. We begin as before in three dimensions, with M-theory compactified on the nonsingular Calabi–Yau fourfold. When we allow the rational curves in the fibers to shrink to zero area, again we get enhanced gauge symmetry, but this time there are surfaces shrinking to points as well as curves shrinking to points. Wrapping the M-theory fivebrane on such surfaces suggests that the spectrum should contain light strings, while wrapping the M-theory membrane on curves within such surfaces would produce a tower of light particle states. All of these states are presumably present as well in the F-theory limit.
2.4. Comparison with Codimension-Two
It is worthwhile making a comparison between the geometry of these codimension-three singularities, and the analogous phenomenon in codimension-two. In the latter case, the F-theory interpretation of a small instanton singularity is that the total space of the elliptic fibration has acquired a singularity which can be resolved by a combination of blowing up the base of the fibration and blowing up the total space . (It is the blowup of the base which leads to an additional branch of the moduli space.) In the codimension-three case, however, blowing up the base is not possible, because it destroys the Calabi–Yau condition.
To see this, consider first a simple model of the codimension-two phenomenon, represented by the Weierstrass equation
One of the coordinate charts when blowing up the base is , ; in that chart, the Weierstrass equation becomes
To make this new Weierstrass equation minimal, we must also change coordinates in and , using , . Our final Weierstrass equation is then
(This two-step change of variables corresponds to the two-step geometric process of a blowup and a flop which was used in to describe this transition.)
According to the Poincaré residue construction, the holomorphic three-form was originally represented by
where represents the partial derivative of (2.7) with respect to the variable (which is not present in the numerator). In the new coordinate system (the minimal model of the blowup) this becomes
Since this latter is the Poincaré residue representation of a holomorphic three-form for the blown up threefold (2.9), our original three-form has acquired neither a zero nor a pole during this process. Thus, both threefolds can be Calabi–Yau and there is a transition between them. (Note that even though we only made the computation in a single coordinate chart, the order of zero or pole of the holomorphic three-form would be the same in any coordinate chart, so this is actually a complete argument.)
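As a hedged illustration of the residue computation just described (the coordinate labels are assumptions introduced here, not necessarily the paper's): for a threefold cut out by a Weierstrass equation $W(s,t,x,y) = 0$, the Poincaré residue representation of the holomorphic three-form is
$$ \Omega \,=\, \operatorname{Res} \frac{ds \wedge dt \wedge dx \wedge dy}{W} \,=\, \left. \frac{ds \wedge dt \wedge dx}{\partial W / \partial y} \right|_{W=0} , $$
so the order of zero or pole acquired under a substitution such as $t = s\,t'$, $x = s^2 x'$, $y = s^3 y'$ is found by counting powers of $s$ in the numerator against those in $\partial W/\partial y$.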
By contrast, let us make a similar computation for a fourfold, starting from the Weierstrass equation
We can represent one of the coordinate charts of the blowup by , , ; in that chart, the Weierstrass equation becomes
We again get a non-minimal Weierstrass model, which can be made minimal by the further coordinate change , . Our final Weierstrass equation is then
The holomorphic four-form was originally represented by
and in the minimal model of the blowup this becomes
Since this is times the Poincaré residue representation of a holomorphic four-form for the blown up fourfold (2.14), our original four-form has acquired a zero along the exceptional divisor . Thus, at most one of these two fourfolds can be Calabi–Yau (i.e., have a non-vanishing holomorphic four-form), and there is no physical transition between them.
3. Singular Bundles and Transitions
3.1. Spectral Data
We begin with a short review of the spectral cover approach to bundles on elliptic fibrations . Let be a smooth elliptic Calabi–Yau variety with a section . As usual, we assume that is moreover a cubic hypersurface in , where is the anticanonical line bundle of the base.
According to , the moduli space of rank semistable bundles with trivial determinant on a smooth elliptic curve is isomorphic to the linear system , where is the origin of . This construction works for families of elliptic curves as well, if the singular fibers are either nodal or cuspidal curves. For the Weierstrass model introduced before, this yields a relative coarse moduli space which is isomorphic to the relative projective space . If is a rank bundle whose restriction to every fiber is semistable and regular with trivial determinant, then determines a section . Such a section is uniquely given by a line bundle over , and sections , .
The converse is not true, i.e., a section does not uniquely determine a bundle. Friedman, Morgan, and Witten construct certain basic bundles associated to a section together with an integer. The construction is rather involved and it will not be reviewed here in detail. After some work, it can be shown that it is equivalent to the standard spectral cover construction. Namely, the section determines a spectral cover which belongs to the linear system , where . In order to construct , let us consider the following diagram.
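Since the diagram itself was lost in extraction, a hedged sketch of the standard rank-three spectral cover equation may help orient the reader (following the Friedman–Morgan–Witten construction; the $a_i$ are labels introduced here for sections of the appropriate line bundles): the cover is cut out inside the elliptic Calabi–Yau by
$$ a_0 + a_2\, x + a_3\, y \,=\, 0 , $$
where $x$ and $y$ are the Weierstrass coordinates of weights two and three; the resulting surface maps three-to-one onto the base of the elliptic fibration, consistent with its membership in the linear system quoted above.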
Can you cover the camel with these pieces?
If you split the square into these two pieces, it is possible to fit the pieces together again to make a new shape. How many new shapes can you make?
What happens when you try and fit the triomino pieces into these two grids?
How many different cuboids can you make when you use four CDs or DVDs? How about using five, then six?
10 space travellers are waiting to board their spaceships. There are two rows of seats in the waiting room. Using the rules, where are they all sitting? Can you find all the possible ways?
Swap the stars with the moons, using only knights' moves (as on a chess board). What is the smallest number of moves possible?
What is the best way to shunt these carriages so that each train can continue its journey?
Can you shunt the trucks so that the Cattle truck and the Sheep truck change places and the Engine is back on the main line?
Take a rectangle of paper and fold it in half, and half again, to make four smaller rectangles. How many different ways can you fold it up?
Building up a simple Celtic knot. Try the interactivity or download the cards or have a go on squared paper.
How will you go about finding all the jigsaw pieces that have one peg and one hole?
In how many ways can you fit two of these yellow triangles together? Can you predict the number of ways two blue triangles can be fitted together?
Can you work out how many cubes were used to make this open box? What size of open box could you make if you had 112 cubes?
Design an arrangement of display boards in the school hall which fits the requirements of different people.
A magician took a suit of thirteen cards and held them in his hand face down. Every card he revealed had the same value as the one he had just finished spelling. How did this work?
Use the three triangles to fill these outline shapes. Perhaps you can create some of your own shapes for a friend to fill?
Find your way through the grid starting at 2 and following these operations. What number do you end on?
A dog is looking for a good place to bury his bone. Can you work out where he started and ended in each case? What possible routes could he have taken?
What happens to the area of a square if you double the length of the sides? Try the same thing with rectangles, diamonds and other shapes. How do the four smaller ones fit into the larger one?
What is the least number of moves you can take to rearrange the bears so that no bear is next to a bear of the same colour?
You have 4 red and 5 blue counters. How many ways can they be placed on a 3 by 3 grid so that all the rows columns and diagonals have an even number of red counters?
Hover your mouse over the counters to see which ones will be removed. Click to remove them. The winner is the last one to remove a counter. How you can make sure you win?
How many different ways can you find of fitting five hexagons together? How will you know you have found all the ways?
A tetromino is made up of four squares joined edge to edge. Can this tetromino, together with 15 copies of itself, be used to cover an eight by eight chessboard?
A toy has a regular tetrahedron, a cube and a base with triangular and square hollows. If you fit a shape into the correct hollow a bell rings. How many times does the bell ring in a complete game?
What is the smallest cuboid that you can put in this box so that you cannot fit another that's the same into it?
When I fold a 0-20 number line, I end up with 'stacks' of numbers on top of each other. These challenges involve varying the length of the number line and investigating the 'stack totals'.
In this town, houses are built with one room for each person. There are some families of seven people living in the town. In how many different ways can they build their houses?
This task, written for the National Young Mathematicians' Award 2016, involves open-topped boxes made with interlocking cubes. Explore the number of units of paint that are needed to cover the boxes. . . .
How many DIFFERENT quadrilaterals can be made by joining the dots on the 8-point circle?
Can you fit the tangram pieces into the outline of this goat and giraffe?
Can you fit the tangram pieces into the outlines of these clocks?
Can you fit the tangram pieces into the outlines of the lobster, yacht and cyclist?
Can you fit the tangram pieces into the outline of the child walking home from school?
Use the lines on this figure to show how the square can be divided into 2 halves, 3 thirds, 6 sixths and 9 ninths.
Can you fit the tangram pieces into the outline of Little Ming playing the board game?
How many balls of modelling clay and how many straws does it take to make these skeleton shapes?
Can you work out what shape is made by folding in this way? Why not create some patterns using this shape but in different sizes?
Can you fit the tangram pieces into the outline of Little Ming?
Can you logically construct these silhouettes using the tangram pieces?
Can you fit the tangram pieces into the outline of this telephone?
Make a flower design using the same shape made out of different sizes of paper.
Can you fit the tangram pieces into the outline of this brazier for roasting chestnuts?
Can you fit the tangram pieces into the outline of Little Fung at the table?
On which of these shapes can you trace a path along all of its edges, without going over any edge twice?
Can you fit the tangram pieces into the outlines of these people?
Can you fit the tangram pieces into the outlines of the workmen?
Can you fit the tangram pieces into the outlines of the chairs?
Can you arrange the shapes in a chain so that each one shares a face (or faces) that are the same shape as the one that follows it?
Can you fit the tangram pieces into the outline of Little Ming and Little Fung dancing?
UNIVERSITY OF WATERLOO
DEPARTMENT OF MANAGEMENT SCIENCES
MSCI603 - Principles of Operations Research
Problem set 3

Problem 1
Cornco produces two products: PS and QT. The sales price for each product and the maximum quantity of each that can be sold during each of the next three months are given in the table below.

          Month 1            Month 2            Month 3
Product   Price ($)  Demand  Price ($)  Demand  Price ($)  Demand
PS           40        50       60        45       55        50
QT           35        43       40        50       44        40

Each product must be processed through two assembly lines: 1 and 2. The number of hours required by each product on each assembly line is given below:

Product   Line 1 (hr)   Line 2 (hr)
PS            3             2
QT            2             2

The number of hours available on each assembly line during each month is given below:

Month   Line 1   Line 2
1        1200     2140
2         160      150
3         190      110

Each unit of PS requires 4 pounds of raw material; each unit of QT requires 3 pounds. As many as 710 pounds of raw material can be purchased at $3 per pound. At the beginning of month 1, 10 units of PS and 5 units of QT are available. It costs $10 to hold a unit of either product in inventory for a month. Solve this LP in Excel Solver and use your output to answer the following questions. (Please include a print-out of your input and the sensitivity report.)

Hint: Let Pi = units of PS produced in month i, PSi = units of PS sold in month i, IPi = inventory of PS at the end of month i, Qi = units of QT produced in month i, QSi = units of QT sold in month i, IQi = units of QT in inventory at the end of month i, and RM = pounds of raw material purchased.

a) Find the new optimal solution if it costs $11 to hold a unit of PS in inventory at the end of month 1.
b) Find the company's new optimal solution if 210 hours on line 1 are available during month 1.
c) Find the company's new profit level if 109 hours are available on line 2 during month 3.
d) What is the most Cornco should be willing to pay for an extra hour of line 1 time during month 2?
e) What is the most Cornco should be willing to pay for an extra pound of raw material?
f) What is the most Cornco should be willing to pay for an extra hour of line 1 time during month 3?
g) Find the new optimal solution if PS sells for $50 during month 2.
h) Find the new optimal solution if QT sells for $50 during month 3.
i) Suppose spending $20 on advertising would increase demand for QT in month 2 by 5 units. Should the advertising be done?

Solution
MAX 40 PS1 + 60 PS2 + 55 PS3 + 35 QS1 + 40 QS2 + 44 QS3 - 3 RM - 10 (IP1 + IP2 + IP3 + IQ1 + IQ2 + IQ3)
s.t.
2) PS1 ≤ 50
3) PS2 ≤ 45
4) PS3 ≤ 50
5) QS1 ≤ 43
6) QS2 ≤ 50
7) QS3 ≤ 40
8) 3 P1 + 2 Q1 ≤ 1200
9) 3 P2 + 2 Q2 ≤ 160
10) 3 P3 + 2 Q3 ≤ 190
11) 2 P1 + 2 Q1 ≤ 2140
12) 2 P2 + 2 Q2 ≤ 150
13) 2 P3 + 2 Q3 ≤ 110
14) PS1 + IP1 - P1 = 10
15) PS2 - IP1 + IP2 - P2 = 0
16) PS3 - IP2 + IP3 - P3 = 0
17) QS1 + IQ1 - Q1 = 5
18) QS2 - IQ1 + IQ2 - Q2 = 0
19) QS3 - IQ2 + IQ3 - Q3 = 0
20) 4 P1 + 4 P2 + 4 P3 + 3 Q1 + 3 Q2 + 3 Q3 - RM ≤ 0
21) RM ≤ 710
All variables ≥ 0
(See Excel file for Sensitivity and Answer Reports.)

a) The allowable increase for IP1 is 4, so a $1 increase is within the allowable limit and the optimal solution does not change. Profit goes down by ($1)(IP1) = $25, since IP1 = 25 in the optimal solution. New profit = 7705 - 25 = $7680.
b) This is a non-binding constraint (it has positive slack), so the shadow price is 0 and the allowable decrease is 1010.75. The new optimal solution remains the same.
c) The allowable decrease is 10 hours, so this 1-hour decrease is within the allowable limit. The shadow price of the constraint is 7, so the new profit is 7705 - 7(1) = $7698.
d) The most Cornco should be willing to pay is the shadow price of this constraint: $3.33.
e) From constraint 20, the shadow price is $10. This is what we would gain if given a "free" pound of raw material, so we would pay up to $10.
f) This is a non-binding constraint, meaning there is slack in it; the shadow price is 0, so we are not willing to pay anything for additional time.
g) A decrease of $10 is within the allowable decrease for this variable, so the decision variables remain the same. New z-value = 7705 - 10(45) = $7255.
h) An increase of $6 is not within the allowable increase of $1, so the current basis is no longer optimal and the question cannot be answered from the current printout.
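For readers who prefer a solver script to Excel, here is a hedged sketch of the same model in PuLP (pip install pulp); the variable names follow the hint and the data are transcribed from the tables above. If the transcription is right, the solver should reproduce the optimal profit of $7705 quoted in the answers, and the printed duals are the shadow prices used in parts (c) through (f).

```python
# Hedged PuLP sketch of the Cornco LP; not the official course solution file.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

months = [1, 2, 3]
price  = {"PS": {1: 40, 2: 60, 3: 55}, "QT": {1: 35, 2: 40, 3: 44}}
demand = {"PS": {1: 50, 2: 45, 3: 50}, "QT": {1: 43, 2: 50, 3: 40}}
line1  = {1: 1200, 2: 160, 3: 190}   # line 1 hours available per month
line2  = {1: 2140, 2: 150, 3: 110}   # line 2 hours available per month

P  = {m: LpVariable(f"P{m}",  lowBound=0) for m in months}   # PS produced
PS = {m: LpVariable(f"PS{m}", lowBound=0) for m in months}   # PS sold
IP = {m: LpVariable(f"IP{m}", lowBound=0) for m in months}   # PS inventory
Q  = {m: LpVariable(f"Q{m}",  lowBound=0) for m in months}   # QT produced
QS = {m: LpVariable(f"QS{m}", lowBound=0) for m in months}   # QT sold
IQ = {m: LpVariable(f"IQ{m}", lowBound=0) for m in months}   # QT inventory
RM = LpVariable("RM", lowBound=0, upBound=710)               # raw material (lb)

prob = LpProblem("Cornco", LpMaximize)
prob += (lpSum(price["PS"][m] * PS[m] + price["QT"][m] * QS[m] for m in months)
         - 3 * RM - 10 * lpSum(IP[m] + IQ[m] for m in months))

for m in months:
    prob += PS[m] <= demand["PS"][m]
    prob += QS[m] <= demand["QT"][m]
    prob += 3 * P[m] + 2 * Q[m] <= line1[m]   # line 1 hours
    prob += 2 * P[m] + 2 * Q[m] <= line2[m]   # line 2 hours
    # Inventory balance: opening stock + production = sales + closing stock.
    prev_p = IP[m - 1] if m > 1 else 10
    prev_q = IQ[m - 1] if m > 1 else 5
    prob += prev_p + P[m] == PS[m] + IP[m]
    prob += prev_q + Q[m] == QS[m] + IQ[m]

prob += 4 * lpSum(P[m] for m in months) + 3 * lpSum(Q[m] for m in months) <= RM

prob.solve()
print("profit:", value(prob.objective))
for name, c in prob.constraints.items():
    print(name, "shadow price:", c.pi)   # the duals used in parts (c)-(f)
```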
Problem 2
A furniture company makes tables (T) and chairs (C), and sells them to customers either finished (F) or unfinished (U). The amount of wood and labor needed, and the selling price of each product, are shown in the next table:

Product   Wood (ft2)   Labor (hr)   Price ($)
UT            40            2           70
FT            40            5          140
UC            30            2           60
FC            30            4          110

There are 40,000 ft2 of wood and 6,000 hours of skilled labor available. Use the sensitivity analysis output report to answer the next parts, independently.
a) How much improvement in the profit can be achieved by:
   i. increasing the quantity of wood available by 3,000 ft2,
   ii. increasing the number of labor hours available by 500 hours.
b) How much of a rise in the price of finished tables is needed for the company to start producing them?
c) If the company reduces the selling price of finished chairs by $5, will it need to change its product mix to optimize its profit?

Solution
Below is a copy of the sensitivity analysis report generated by Excel.

Variable Cells
Cell   Name   Final Value   Reduced Cost   Objective Coefficient   Allowable Increase   Allowable Decrease
$B$4   UT     0             -76.67          70                      76.67                1E+30
$C$4   FT     0              -6.67         140                       6.67                1E+30
$D$4   UC     0             -50             60                      50                   1E+30
$E$4   FC     1333.33         0            110                      1E+30                5

Constraints
Cell   Name        Final Value   Shadow Price   R.H. Side   Allowable Increase   Allowable Decrease
$F$5   Wood LHS    40000         3.67           40000       5000                 40000
$F$6   Labor LHS   5333.33       0               6000       1E+30                666.67

a) i. The allowable increase in the wood quantity without changing the basis is 5000, so we can use sensitivity analysis. The shadow price of wood is 3.67, so increasing the quantity of wood by 3000 will increase the profit by $11,000.
   ii. The shadow price of a labor hour is zero (the constraint is non-binding), so increasing the quantity of labor hours will not affect the profit.
b) The reduced cost of finished tables (FT) is -6.67, so the price of finished tables must increase by $6.67 before the company starts producing them.
c) The allowable decrease in the FC coefficient in the objective function is 5, so the company doesn't need to change its product mix.
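A hedged PuLP sketch of Problem 2, included to show where the reduced costs and shadow prices in the report come from (values match the report after rounding):

```python
# Hedged PuLP sketch of the furniture LP; reproduces the sensitivity report.
from pulp import LpProblem, LpMaximize, LpVariable, value

x = {name: LpVariable(name, lowBound=0) for name in ("UT", "FT", "UC", "FC")}
prob = LpProblem("furniture", LpMaximize)
prob += 70 * x["UT"] + 140 * x["FT"] + 60 * x["UC"] + 110 * x["FC"]
prob += 40 * x["UT"] + 40 * x["FT"] + 30 * x["UC"] + 30 * x["FC"] <= 40000, "wood"
prob += 2 * x["UT"] + 5 * x["FT"] + 2 * x["UC"] + 4 * x["FC"] <= 6000, "labor"
prob.solve()

print(value(prob.objective))                       # optimal profit
print({n: v.varValue for n, v in x.items()})       # FC = 1333.33, rest 0
print({n: round(v.dj, 2) for n, v in x.items()})   # -76.67, -6.67, -50, 0
print(prob.constraints["wood"].pi)                 # 3.67 per ft2 of wood
print(prob.constraints["labor"].pi)                # 0 (labor has slack)
```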
Problem 3
A dairy producer makes two types of cheese. The only scarce resource needed to produce cheese is skilled labor. The company has two specialized workers. The first (W1) is willing to work up to 40 hours per week and is paid $25 per hour. The second (W2) is willing to work up to 50 hours per week and is paid $30 per hour. The time required from each worker, the raw material cost, and the selling price of a unit of each type of cheese are shown in the table below.

Type   W1 (hr)   W2 (hr)   Raw material cost ($)   Selling price ($)
1         1         2              250                    400
2         2         2              200                    420

a. Formulate an LP model for this problem to maximize the profit of the dairy producer.
b. Solve the LP model graphically.
c. From the graphical solution obtained in (b), determine the range of prices of type 1 and type 2 cheese at which the current basis remains optimal.
d. If worker W1 is willing to work only 30 hours per week, would the current basis remain optimal? Would the optimal solution change?
e. Determine the maximum amount that should be paid to each worker for an additional hour of work every week.
f. If worker W2 is willing to work only 48 hours, what would the company's profit be? Verify your answer graphically.

Solution
a. Let x1 = the number of units of type 1 cheese produced and x2 = the number of units of type 2 cheese produced.
max (400-250)x1 + (420-200)x2 - 25(x1 + 2x2) - 30(2x1 + 2x2) = 65x1 + 110x2
s.t. x1 + 2x2 ≤ 40 (W1 hours)
     2x1 + 2x2 ≤ 50 (W2 hours)
     x1 ≥ 0, x2 ≥ 0
b. The optimal solution is the intersection point of the two constraints, (x1, x2) = (10, 15), with a profit of $2300.
c. The current solution remains optimal as long as the slope of the iso-profit line stays between the slopes of the binding constraints.
For type 1 cheese: -1/2 ≥ -c1/110 ≥ -1, i.e., 55 ≤ c1 ≤ 110. Since the price of type 1 cheese is c1 + 335 (adding back $250 raw material and $85 labor per unit), the price can range between $390 and $445 per unit without changing the optimal solution.
For type 2 cheese: -1/2 ≥ -65/c2 ≥ -1, i.e., 65 ≤ c2 ≤ 130. Since the price of type 2 cheese is c2 + 310, it can range between $375 and $440 per unit without changing the optimal solution.
d. Yes, the current basis remains optimal (the same two constraints stay binding), but the optimal solution changes to (x1, x2) = (20, 5), with a profit of $1850.
e. The shadow prices are (y1, y2) = (45, 10). The maximum amount that should be paid for an additional hour per week is $45 for worker 1 and $10 for worker 2.
f. The company's profit decreases by 10 × (50 - 48) = $20. The graphical solution is (x1, x2) = (8, 16), and the profit is $2280.
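Parts (b) and (e) can be double-checked numerically: at the optimum both labor constraints bind, so the primal vertex and the dual (shadow) prices each come from one 2×2 linear system. A minimal sketch assuming numpy:

```python
# Check of Problem 3: vertex from the binding constraints, duals from A^T y = c.
import numpy as np

A = np.array([[1.0, 2.0],    # W1 hours: x1 + 2*x2 <= 40
              [2.0, 2.0]])   # W2 hours: 2*x1 + 2*x2 <= 50
b = np.array([40.0, 50.0])
c = np.array([65.0, 110.0])  # net profit coefficients derived in part (a)

x = np.linalg.solve(A, b)    # binding-constraint vertex
y = np.linalg.solve(A.T, c)  # shadow prices at the optimal basis

print(x, c @ x)   # [10. 15.] 2300.0 -> (x1, x2) = (10, 15), profit $2300
print(y)          # [45. 10.]        -> shadow prices for W1 and W2 hours
```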
Problem 4
A workshop makes two types of hand-made genuine leather products: wallets and belts. Each wallet requires 1 square foot of leather and 30 minutes of labor time. Each belt requires 2 square feet of leather and 20 minutes of labor time. The workshop earns $40 profit from each wallet sold and $50 from each belt sold. Each day there are 500 square feet of leather and 200 labor-hours available.
a. Formulate an LP for this problem and solve it using the graphical method.
b. What is the maximum amount the workshop should be willing to pay for an extra hour of labor?
c. What is the maximum amount the workshop should be willing to pay for an extra square foot of leather?
d. What is the profit range of wallets that will keep the optimal solution unchanged?
e. What is the profit range of belts that will keep the optimal solution unchanged?
f. By how much can the workshop increase or decrease the quantity of leather available without changing its optimal product mix?
g. What is the impact of increasing the labor time by 10 hours daily on the workshop profit?

Solution
a. Let W be the number of wallets and B the number of belts produced daily.
Max 40W + 50B
s.t. W + 2B ≤ 500 (leather constraint)
     W/2 + B/3 ≤ 200 (labor constraint)
     W, B ≥ 0
Solving graphically: W* = 350, B* = 75, Z* = $17,750.
b. The workshop should not pay more than the dual price of the second constraint. To find it, we solve the two equations W + 2B = 500 and W/2 + B/3 = 201, giving W = 353 and B = 73.5. Plugging these values into the objective function gives Z_new = 17,795, so the dual price = Z_new - Z_old = $45.
c. The same method as in part (b).
d. We need the range of optimality. For the optimal solution to remain unchanged, the slope of the iso-profit line must remain within the slopes of the binding constraints:
-3/2 ≤ -c/50 ≤ -1/2, i.e., 25 ≤ c ≤ 75.
So the profit of wallets can range between $25 and $75 without changing the optimal solution.
e. The same method as in part (d).
f. To maintain the same product mix, the binding constraints must remain binding. The leather constraint can shift parallel to itself between the points (400, 0) and (0, 600); its RHS at these points is 400 and 1200, respectively. So both products remain in the optimal product mix for leather availability between 400 and 1200 ft2.
g. Z_new - Z_old = change in RHS × dual price = 10 × 45 (from part b) = $450 increase in profit.
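The re-solve-and-difference trick of parts (b) and (c) is easy to script. A minimal sketch assuming numpy; it also produces the leather figure that part (c) leaves to the reader (about $17.50 per ft2 under this model):

```python
# Dual prices for Problem 4 by perturbing one right-hand side at a time.
import numpy as np

def profit(leather, labor):
    # Both constraints binding at the optimum: W + 2B = leather, W/2 + B/3 = labor.
    A = np.array([[1.0, 2.0], [0.5, 1.0 / 3.0]])
    W, B = np.linalg.solve(A, [leather, labor])
    return 40 * W + 50 * B

base = profit(500, 200)             # 17750.0 at (W, B) = (350, 75)
print(profit(500, 201) - base)      # ~45.0 -> dual price of a labor hour
print(profit(501, 200) - base)      # ~17.5 -> dual price of a sq ft of leather
```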
Problem 5
Farmer Leary grows wheat and corn on his 45-acre farm. He can sell at most 140 bushels of wheat and 120 bushels of corn. Each planted acre yields either 5 bushels of wheat or 4 bushels of corn. Wheat sells for $30 per bushel, and corn sells for $50 per bushel. Six hours of labor are needed to harvest an acre of wheat, and ten hours are needed for an acre of corn. As many as 350 hours of labor are available.
a) Formulate the problem as an LP model, and solve it graphically.
b) What is the range of wheat and corn prices that will keep the current basis optimal?
c) What is the range of land and labor availability that will keep the current basis optimal?

Solution
a) Decision variables: xw = number of acres planted with wheat, xc = number of acres planted with corn.
Constraints:
5xw ≤ 140 (demand limit for wheat, C1)
4xc ≤ 120 (demand limit for corn, C2)
xw + xc ≤ 45 (land availability, C3)
6xw + 10xc ≤ 350 (labor availability, C4)
Objective function: max 5(30)xw + 4(50)xc = 150xw + 200xc
[Figure: graphical solution showing constraints C1-C4, the iso-profit line, and the optimal vertex at the intersection of C3 and C4.]
xw* = 25 acres, xc* = 20 acres, z* = $7,750.
b) The current solution remains optimal as long as the slope of the iso-profit line stays between the slopes of the binding constraints.
For wheat: -3/5 ≥ -cw/200 ≥ -1, or 120 ≤ cw ≤ 200, so the price of wheat can range between 24 and 40 $/bushel without changing the optimal solution.
For corn: -3/5 ≥ -150/cc ≥ -1, or 150 ≤ cc ≤ 250, so the price of corn can range between 37.5 and 62.5 $/bushel without changing the optimal solution.
c) Land availability: C3 can shift up parallel to itself without changing the basis until it hits the intersection of C1 and C4, the point (28, 18.2); the RHS of C3 there is 46.2 acres. C3 can shift down parallel to itself until it hits the intersection of C2 and C4, the point (8.33, 30); the RHS of C3 there is 38.33 acres.
Labor availability: C4 can shift up parallel to itself until it hits the intersection of C2 and C3, the point (15, 30); the RHS of C4 there is 390 hours. C4 can shift down parallel to itself until it hits the intersection of C1 and C3, the point (28, 17); the RHS of C4 there is 338 hours.

Problem 6
An investor has 4 investment alternatives for the next year: Gold (G), Certificates of deposit (C), Bonds (B) and Stocks (S). The expected return and the risk score for these investments are shown below:

Investment type   G    C    B    S
Expected return   2%   4%   6%   10%
Risk score        2    1    3    6

Available funds are to be allocated between the different alternatives such that:
1. the expected total return (i.e., profit) is at least 5%;
2. the average risk score does not exceed 2;
3. at least half of the funds are invested in fixed income instruments (C and B).
Formulate an LP to determine the optimal investment plan. Solve it using Excel. If possible without resolving the problem, use the sensitivity report to answer the following questions:
a. If the expected return of the certificates of deposit falls by 0.1%, how would this affect the allocation and the profit?
b. What is the minimum expected return of bonds that justifies buying them?
c. If the investment policy requires at least 5% of the investment to be held as gold, how would that affect the profit?
d. The return of stocks is known to be volatile and may fall short of expectations. What is the minimum expected return of stocks such that the current allocation is not changed?
e. If the investor's risk tolerance becomes higher such that a risk score of 3 is now acceptable, how would that affect the allocation and the profit?
f. The investor lowers the acceptable profit threshold to 4.5%. How would this affect the allocation and the profit?
g. The minimum percentage of funds to be invested in fixed income instruments is increased to 75%. How would this affect the allocation and the profit?
h. A new assessment reached the conclusion that gold is no longer a suitable alternative for this investor. How would that affect the allocation?
i. A new investment product called "leveraged ETF" is introduced, which has a risk score of 8. What is the minimum expected return that justifies allocating funds to this product?

Solution
Let G, C, B and S be the fractions of funds invested in gold, certificates of deposit, bonds and stocks, respectively. The problem can be formulated as:
Max z = 2G + 4C + 6B + 10S
s.t.
2G + 4C + 6B + 10S ≥ 5 (expected return)
2G + C + 3B + 6S ≤ 2 (risk score)
C + B ≥ 0.5 (minimum fixed-income ratio)
G + C + B + S = 1 (all funds are invested)
G, C, B, S ≥ 0
The optimal solution is: G = 0, C = 0.8, B = 0, S = 0.2, z = 5.2%.

Below is the Excel sensitivity report.

Variable Cells
Cell   Name   Final Value   Reduced Cost   Objective Coefficient   Allowable Increase   Allowable Decrease
$B$2   G      0             -3.2           2                       3.2                  1E+30
$C$2   C      0.8            0             4                       6                    0.667
$D$2   B      0             -0.4           6                       0.4                  1E+30
$E$2   S      0.2            0             10                      1E+30                1

Constraints
Cell   Name              Final Value   Shadow Price   R.H. Side   Allowable Increase   Allowable Decrease
$F$3   Return LHS        5.2           0              5           0.2                  1E+30
$F$4   Risk LHS          2             1.2            2           1.5                  0.167
$F$5   Fixed income LHS  0.8           0              0.5         0.3                  1E+30
$F$6   Funds LHS         1             2.8            1           1                    0.0714

a. The decrease in the C coefficient is within the allowable limit (< 0.667), so the optimal allocation will not change, but the profit will decrease by 0.1 × 0.8 = 0.08, to 5.12%.
b. The reduced cost of B is -0.4, so the expected return of bonds must increase by 0.4, to 6.4%, for B to enter the basis (take a non-zero value, i.e., for bonds to be added to the portfolio).
c. A unit of G forced into the optimal solution lowers the objective value by the reduced cost of G. Thus, 0.05 units cause a decline of 0.05 × 3.2 = 0.16% in profit, to 5.04%.
d. The allowable decrease in the coefficient of S is 1, so the minimum expected return of stocks for which the current allocation remains optimal is 10 - 1 = 9%.
e. The shadow price of the risk-score constraint is +1.2 and the allowable increase is 1.5 ≥ 1. So if the RHS of this constraint increases by one unit, the objective function improves by 1.2 units, to 5.2 + 1.2 = 6.4%. The basis (and hence the binding constraints) remains unchanged. Solving the equations C + 6S = 3 and C + S = 1 yields C = 0.6 and S = 0.4, so z = 4(0.6) + 10(0.4) = 6.4% (the same result found using the shadow price).
f. The expected-return constraint is non-binding and the allowable decrease in its RHS is infinite, so this change in parameters has no effect on the allocation or the profit.
g. The allowable increase in the RHS of constraint 3 without changing the basis is 0.3, which is greater than 0.25. This constraint is non-binding and will remain non-binding, so nothing changes.
h. This change is equivalent to adding the constraint G = 0. Since the current optimal solution does not violate the new constraint, it has no effect on the allocation or the profit.
i. Adding a unit of the new product consumes 8 units of the risk constraint (constraint 2) and 1 unit of constraint 4, so it reduces the attainable profit by 8(1.2) + 1(2.8) = 12.4%. This is the minimum expected return that justifies adding the new product to the portfolio.
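The shadow-price prediction in part (e) can be verified by re-solving. A hedged PuLP sketch (the model is transcribed from the formulation above):

```python
# Re-solve the investment LP with the risk ceiling raised from 2 to 3; the
# objective should move by the shadow price (1.2), from 5.2% to 6.4%.
from pulp import LpProblem, LpMaximize, LpVariable, value

def solve(risk_cap):
    G, C, B, S = (LpVariable(n, lowBound=0) for n in "GCBS")
    prob = LpProblem("investment", LpMaximize)
    prob += 2 * G + 4 * C + 6 * B + 10 * S          # expected return (%)
    prob += 2 * G + 4 * C + 6 * B + 10 * S >= 5     # return floor
    prob += 2 * G + C + 3 * B + 6 * S <= risk_cap   # average risk score
    prob += C + B >= 0.5                            # fixed-income floor
    prob += G + C + B + S == 1                      # fully invested
    prob.solve()
    return value(prob.objective)

print(solve(2))   # 5.2
print(solve(3))   # 6.4 -> matches 5.2 + 1.2 * (3 - 2)
```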
This critical review describes the confused application of significance tests in environmental toxicology and chemistry that often produces incorrect inferences and indefensible regulatory decisions. Following a brief review of statistical testing theory, nine recommendations are put forward. The first is that confidence intervals be used instead of hypothesis tests whenever possible. The remaining recommendations are relevant if hypothesis tests are used. They are as follows: Define and justify Type I and II error rates a priori; set and justify an effect size a priori; do not confuse p(E | H0) and p(H0 | E); design tests permitting Positive Predictive Value estimation; publish negative results; estimate a priori, not post hoc, power; as warranted by study goals, favor null hypotheses that are not conventional nil hypotheses; and avoid definitive inferences from isolated tests.
Scientists use accepted methods to generate new information, which is then organized around explanations. What constitutes accepted methods or favored explanations changes as experience and insight grow. It follows that all healthy sciences, including environmental toxicology and chemistry, require periodic review and revision of their practices and paradigms.
The central role of significance testing in assessing evidence suggests that associated statistical methods deserve critical evaluation. That is the specific goal of this review. Fundamental changes occurring in the application of statistics in health science and epidemiology [1–6], socioeconomics [7–9], psychology [10,11], and ecology [12,13] will be brought forward as being pertinent to environmental sciences. The overarching premise during this review is that significance tests should effectively guide rational transformation of observations into knowledge about how contaminants act in or affect our environment.
Significance testing, especially null hypothesis–based significance testing, is arguably one of the most common ways in which scientific inferences are made by environmental chemists and toxicologists. Yet, its prominence and the unanimity about its soundness emerge more from custom than from scrutiny. The initial disagreements of Fisher, Pearson, and Neyman about key features remain unresolved and imperfectly integrated into present-day applications. Common misinterpretations about the exact meanings [2,10,11,14–16] of Type I and II error rates confuse inferences, including those directly germane to regulatory activities. Biologically trivial effects that are statistically significant are given unwarranted attention, and biologically crucial effect sizes are ignored. Publication bias confuses literature interpretation, meta-analysis, and estimation of prior probabilities. Realization of these shortcomings in what Gigerenzer calls statistical rituals recently prompted fundamental shifts in other sciences, including discouragement or outright prohibition of significance testing in prominent medical, psychology, and conservation biology journals.
What is p?
No simple answer, save a careless one, exists for this question. The two most common explanations emphasize belief or relative frequency of occurrence. The original Bayesian context was that probability (p) suggests plausibility: p informs an investigator so that his or her degree of belief in a hypothesis can be adjusted based on evidence. For example, a forecast of a 95% chance of a blizzard suggests that one ought to remain home. One's degree of belief or level of certainty changes as evidence accumulates and is used to generate p. Attempting to steer clear of Bayesian subjectivity, frequentists treat p as a probability for an observable event, outcome, or state, such as a 50% chance in the long run of heads resulting from fair coin tosses. One's state of belief about a certain hypothesis is irrelevant to the frequentist.
What does a “significant” test mean?
Again, no single answer exists. A set of contrasting answers, however, is commonly presented based on the pioneering works of Bayes, Fisher, Neyman, and Pearson.
Fisher discarded the Bayesian vantage as being too subjective and dependent on uncertain prior probabilities. He established significance testing as a more objective inferential approach. Strongly influenced by Popper's logic of falsification, Fisher asserted that sufficiently improbable events can be considered impossible: Statistical methods facilitate “practical falsification” or pseudofalsification [18–20]. A p value associated with a particular test statistic suggests whether a null hypothesis is sufficiently improbable to be considered practically falsified in the sense of a logical refutation. The p value is the probability of getting the observed (e.g., 8 heads in 10 coin tosses) or more extreme outcomes (e.g., 9 or 10 heads in 10 tosses) under a null hypothesis. Fisher did not advocate dogmatic application of any particular threshold p, although at one point, he did suggest the 0.05 convention as one of several standards for evidential strength [2,3,16]. He explained that a p value of 0.05 or less might be appropriate to make an inference in some instances but, as dictated by the researcher's understanding or goals, might only suggest the need for further experimentation in others. The prior probabilities of the Bayesian approach might be acceptable to Fisher only if derived in a clear, rigorous, and defensible manner. The obvious fact that improbable events do occur remained a major shortcoming of Fisher's position. Sacrificing some objectivity, Popper countered that improbable outcomes in Fisher's approach might be better thought of as not being reproducible at will.
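The coin-toss example above is easy to make concrete. A minimal sketch (Python with scipy assumed; not part of the original review):

```python
# Fisher's p value for the coin example: probability of the observed (8 heads
# in 10 fair tosses) or more extreme outcomes (9 or 10 heads), one-sided.
from scipy.stats import binom

p = binom.sf(7, n=10, p=0.5)   # P(X >= 8) = 1 - P(X <= 7)
print(round(p, 4))             # 0.0547: improbable, but plainly not impossible
```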
Neyman and Pearson judged Fisher's vantage to be untenable, introducing in its stead hypothesis testing, which defines primary and rival (alternate) hypotheses. Error rates are established for falsely deciding to reject the primary hypothesis (Type I error rate of α) or alternate hypothes(es) (Type II error rate of β). The seriousness of each error type determined which hypothesis was the primary hypothesis (i.e., that for which a decision error would have the most serious consequences) and the values for α and β. The most serious error type determines the primary hypothesis, because selection of the most appropriate test statistic is based on the assumption that that hypothesis is true.
When the errors can be distinguished by their gravity, the more serious of them is normally called a Type I error… suppose two alternative theories concerning a food additive were entertained, one that the substance is safe, the other that it is highly toxic… it would be less of a danger to assume that a safe additive was toxic than that a toxic one was safe.
Arguably, when studying adverse effects, the most serious consequences would most often be associated with falsely deciding that an unsafe compound is safe; therefore, the null hypothesis should be that it is unsafe and the alternate hypothesis that it is safe. Oddly, the opposite is the more common practice. For example, a Dunnett's test might be applied to a sublethal effects data set to test the means of five nonreference treatments relative to a reference treatment mean under the null hypothesis that no difference exists. Adopting Fisher's terminology, the hypothesis associated with α was called the null hypothesis. An effect size (ES) also is defined (i.e., what constitutes a meaningful effect in any particular test). For example, the ES might be a 25% decrease in reproduction in the above sublethal effects test if a toxicant-induced decrease in reproduction of that size would result in local extinction of a wild population. A test critical region is then established, and the observation-derived test statistic is compared to that region in a way that minimizes the chance of exceeding the specified error rates. Unlike Fisher's significance testing of a hypothesis, two hypotheses are incorporated, and the rates of each of the two error types are defined a priori based on judgment. The α and β are decision error rates associated with a particular test or experiment; they are not thresholds for deciding whether a hypothesis is plausible. Unlike Fisher's approach, the Neyman-Pearson approach aims only to guide future behavior about the proposed hypotheses (i.e., to act as if one or another hypothesis were true), not to infer from the experiment that a null hypothesis was falsified [2,16]. According to the Neyman–Pearson line of reasoning, you are more likely to be correct in the long run if you behave toward a hypothesis in the manner suggested by the test results. A shortcoming of this context is that the in-the-long-run condition of such testing is a fiction relative to actual scientific inquiry and decision making.
Fisher's approach focuses on inductive inference about a single hypothesis using pseudofalsification, whereas the Neyman-Pearson approach informs future behavior based on a test using two complementary hypotheses, associated decision error rates, and a specified ES. Both fail to completely avoid the subjectivity of Bayesian methods. Objective criteria are not possible for identifying a sufficiently improbable p value in Fisher's significance testing or for choosing the right combination of primary hypothesis, decision error rates, and ES in Neyman-Pearson hypothesis testing . Depending on the test statistics applied to the same data, Fisher's null hypothesis might or might not be rejected. An objective way of defining how favorable a Neyman-Pearson hypothesis test result is relative to behaving as if a hypothesis were true is not congruent with the common practice of categorizing results with conventions such as accepted/rejected at α = 0.05 or the “roving α” classification of results as nonsignificant (p > 0.05), significant (0.01 < p ≤ 0.05), or highly significant (p ≤ 0.01).
The Bayesian approach dominated statistical thinking before Fisher, Neyman, and Pearson but was pushed aside in the 1920s as being too subjective. Bayesian methods currently enjoy much wider acceptance, primarily because the subjectivity in all approaches is more widely appreciated but also because convenient software now exists for its implementation. The Bayesian approach uses probabilities to gauge the belief in a particular hypothesis warranted by evidence; for example,
p(H1 | E) = p(H1) × p(E | H1) / p(E)

where p(H1 | E) is the posterior probability of the hypothesis (H1) given the evidence or data (E), p(H1) is the probability of H1 prior to considering E, p(E | H1) is the probability of getting E if H1 is true, and p(E) is the probability of E regardless of whether H1 is true. In this simplest form of Bayes' theorem, the prior probability (e.g., p(H1)) is combined with a normalized likelihood (p(E | H1)/p(E)) to estimate a posterior probability of a hypothesis based on the evidence (e.g., p(H1 | E)). Here, the likelihood of the evidence given that the hypothesis is true, p(E | H1), is normalized to p(E). As new evidence is gathered, the posterior p can be used as a prior p to produce a new posterior p. Several alternate hypotheses (n − 1) can be included in these Bayesian calculations to infer the degree of belief in a hypothesis (H1) as warranted by evidence:

p(H1 | E) = p(H1) × p(E | H1) / Σ [p(Hi) × p(E | Hi)]
where Σ p(Hn) sums to one. In this case, the prior p for H1 is multiplied by the likelihood of the evidence given H1 divided by the sum of the prior p for each Hi times the corresponding p for the evidence given a particular Hi. (The reader should note that Bayes factors allow much more involved comparisons of competing models than illustrated here.) So p in Bayesian inference methods reflects an evidence-based belief in a particular hypothesis (among a specified set of hypotheses); for example, p(Fish kill | Copper discharge) = 0.98 warrants high, but not absolute, confidence that a fish kill will occur if a copper exposure of the specified qualities occurs.
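A minimal sketch of the multiple-hypothesis calculation above (plain Python; the priors and likelihoods are invented for illustration and are not from the review):

```python
def posterior(priors, likelihoods):
    """p(Hi | E) for each Hi, from p(Hi) and p(E | Hi) via Bayes' theorem."""
    joint = [pr * lk for pr, lk in zip(priors, likelihoods)]
    total = sum(joint)              # p(E) = sum over i of p(Hi) * p(E | Hi)
    return [j / total for j in joint]

# H1: the copper discharge causes a fish kill; H2: it does not (toy numbers).
print(posterior(priors=[0.5, 0.5], likelihoods=[0.9, 0.1]))   # [0.9, 0.1]
```

The posterior from one study can then be fed back in as the prior for the next, exactly as described above.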
SIGNIFICANCE TESTING PROBLEMS
That confusion exists about significance tests is unsurprising given both how recently this testing convention became established and the different interpretations of p. Emerging consensus from several sciences is that the resulting significance test malpractice now impedes as much as fosters progress [2,3,5,6,8-13,16].
One major difficulty involves the inconsistent combining of elements of the Fisher and Neyman-Pearson approaches into what Cohen [10,11] calls the “usual reject-H0-confirm-the-theory” approach. The following example using sublethal effects testing of a novel class of compounds is typical. For each in a class of new compounds, a series of concentration treatments is established within an experimental design prescribed by regulatory guidance or published by a reputable scientist. Observations of potential effect are taken and a Dunnett's test applied with a null hypothesis of no difference from a reference treatment (α = 0.05). Next, the test statistic p value for each concentration treatment is used to classify that treatment concentration as either having or not having an adverse effect. The final research report does not discuss those compounds for which no significant adverse effect was noted in any treatment, because failure to reject the null hypothesis could have resulted from inadequate experimental design. The researcher decides to repeat those nonsignificant tests at some later date. The following problems emerge in this approach.
First, a misinterpretation of Neyman-Pearson hypothesis tests appears to be based on the Fisherian context. The α is one of two conditional probabilities of making a decision error during a specific hypothesis test, not a metric allowing one to decide if the null hypothesis is true. The Neyman-Pearson vantage cannot be taken to decide to act as if an effect exists, because two a priori decision rates were not established. On the other hand, no alternate hypothesis of an effect would exist if Fisher's vantage were taken. Second, the strict rejected versus not rejected interpretation of results is based merely on an arbitrarily selected convention (p ≤ 0.05). Third, a pervasive misinterpretation exists that a low p value associated with the primary hypothesis (e.g., 0.04) indicates a high p of the secondary hypothesis being true (e.g., perhaps 0.96). Fourth, a pervasive inattention to power (1 − β) is present despite its essential role in Neyman-Pearson hypothesis testing. Fifth, judgment was not applied a priori to select the most appropriate Type I and II error rates. Sixth, a pervasive preoccupation with statistical significance and inattention to ES exists, including a failure to establish ES a priori. Seventh, the conventional no-effect (nil) hypothesis approach is applied such that the obligation to generate the most meaningful or discerning alternate hypotheses is ignored. Finally, a tendency exists to publish significant results more readily than nonsignificant results or to expand a study until a significant result is found based, incorrectly, on Fisher's pseudofalsification context of significance testing.
Although Fisher intended p to be a flexible inferential tool for rejection of a specified hypothesis and Neyman and Pearson intended p (to be assessed relative to the α decision error rate in obligatory combination with β and ES) to dictate behavior toward primary and alternate hypotheses, p values of less than 0.05 commonly are used to definitively reject a null hypothesis and to infer that the alternate hypothesis is true. For example, it is usual to conclude, using α = 0.05 from a conventional sublethal effect test, that an effect exists at a treatment concentration, because that treatment's mean response was statistically significantly different from that of the reference mean. The false assertion that improbability of a primary hypothesis inferred from a p value means that the alternate hypothesis is probable is so prevalent that it has a name, the inverse probability error [3,10,11]. In actuality, a “p value substantially overstates the evidence against a null hypothesis”. During the application of Neyman-Pearson hypothesis testing, it is likely that little or no time was spent balancing Type I and II error rates or determining what constituted a meaningful ES. The most important decision error might be unduly trivialized or a toxicologically trivial, but statistically significant, effect elevated to the status of publishable finding. Finally, the effect level (e.g., lowest-observed-effect concentration and associated no-observed-effect concentration) is approximated from a single test, and future testing is implied to be unnecessary. Results are not treated as conditional evidence subject to change as more evidence accrues.
These are the features of the presently confused blending of the decision-based approach of Neyman and Pearson with Fisher's context of pseudofalsification to produce what Ziliak and McCloskey call mechanical testing. Gigerenzer suggests that mechanical application of the “null ritual” is perpetuated by risk aversion associated with picking the wrong statistical tool from a diverse toolbox.
Awareness of the origins of the [null] ritual and of its rejection could cause a virulent cognitive dissonance, in addition to dissonance with editors, reviewers, and dear colleagues. Suppression of conflicts and contradicting information is in the very nature of this social ritual.
Cognitive dissonance aside, evolving best practices are essential to the health of any science. Nine changes in current practices that can reduce some of these problems are suggested below.
Define and justify Type I and II error rates
The recent convention of applying an α of 0.05 in combination with an unspecified β and ES is inappropriate. Hypothesis test α and β are chosen based on the seriousness of making each decision error, yet recent custom abrogates such judgments. Fixing α but allowing β to range within ill-defined limits set by experimental design and data variability implies that only one decision error is truly crucial (i.e., Fisher's vantage for judging the plausibility of a single hypothesis).
Quotients are convenient tools to balance the relative seriousness of the two decision errors. Pairing error rates of α = 0.05 and β = 0.2 implies that the consequences of making a Type I error are fourfold more serious than those of a Type II error, because α/β = 0.25 = 1/4. Selecting α = β = 0.05 for a toxicity test indicates that the seriousness of falsely rejecting the hypothesis of no effect is the same as that of falsely rejecting the hypothesis of an effect. The majority of sublethal effect tests fix α at 0.05 and, by virtue of standard design, produce β in the range of 0.2 (assuming an ES of ~20-30%). This creates the debatable default position that the consequences of a Type I error (i.e., falsely rejecting the hypothesis of no toxic effect) are fourfold more serious than those of a Type II error (i.e., falsely rejecting the hypothesis of a toxic effect). Avoiding judgment does not eliminate decision error consequences: It simply obscures them, resulting in compromised judgments about the need for future scrutiny.
Define and justify the test ES
A p value is an unreliable indicator of whether a decision is being made about a meaningful effect—about what McCloskey calls the hypothesis test's “oomph”. As an extreme illustration, if the size of an effect is treated as being irrelevant, a null hypothesis of no difference will always be rejected given enough observations. Establishing a test without defining a meaningful ES is inherently misleading. A hypothesis test should be designed with an a priori ES based on sound insight and knowledge [9–11].
First consider using ES confidence limits
Arguments to replace hypothesis testing with presentations of confidence limits are increasing as a consequence of the confusion surrounding ES, p, and error rates [17,25,26]. For 25 years, several key human health science journals have depended increasingly on confidence intervals to convey ES, precision, and statistical significance simultaneously.
It is important to note that the 95% value for confidence intervals, like the Type I error rate of 0.05, is a convention and that other values might be more appropriate depending on circumstances or goals. Regardless of which percentage is selected, care should be taken when interpreting intervals. For example, a 95% confidence interval defines the interval x̄ − t(95, n−1)·SE ≤ μ ≤ x̄ + t(95, n−1)·SE, where t(95, n−1) is the two-sided 95% critical value of Student's t with n − 1 degrees of freedom and SE is the standard error. If one were to generate many such intervals, 95% of the intervals would contain μ in the long run. It is incorrect to state the probability is 0.95 that a particular interval includes μ.
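As a concrete illustration of the interval just defined, here is a minimal Python sketch; the sample values are hypothetical placeholders, not data from any study discussed here:

```python
# Hypothetical illustration: 95% t-based confidence interval for a mean,
# x_bar +/- t * SE, where SE = s / sqrt(n).
import math
from scipy import stats

sample = [4.2, 3.9, 5.1, 4.7, 4.4, 4.9, 4.1, 4.6]  # hypothetical measurements
n = len(sample)
x_bar = sum(sample) / n
s = math.sqrt(sum((x - x_bar) ** 2 for x in sample) / (n - 1))
se = s / math.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% critical value

lower, upper = x_bar - t_crit * se, x_bar + t_crit * se
print(f"95% CI for the mean: ({lower:.3f}, {upper:.3f})")
```

In the long run, 95% of intervals constructed this way contain μ; no single computed interval has a 0.95 probability of containing it.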
Cumming and Finch suggest the following three general rules for confidence interval presentation: Select error bars associated directly with the relevant effect, make the presentation sensitive to the experimental design, and interpret the confidence intervals thoroughly. More guidance can be found in Cumming and Finch and in Di Stefano for applying confidence intervals to a range of common situations. Altman et al. describe detailed applications of confidence interval techniques, including those dealing with means, medians, proportions, regression analysis, time-to-event studies, and meta-analyses. Altman et al. also provide convenient software to facilitate implementation of these methods. The SAS® software package also has procedures (e.g., the INTERVALS option in PROC CAPABILITY) that make calculations convenient for a wide range of analyses.
Do not confuse p(E | H0) and p(H0 | E)
This point can be introduced with an old joke. Walking down a city street, a woman passes a man who is jumping and waving his arms wildly. She asks him why he's doing this, and he responds, “It scares away elephants.” To her retort that there are no elephants in the city, the man exclaims, “You see. It works!” Put in more explicit, but equally absurd, terms, p(No Elephants | Behavior Scares Elephants) = p(Behavior Scares Elephants | No Elephants). Obviously, knowledge of other probabilities, such as p(Elephants), is required to judge the soundness of the gentleman's hypothesis.
Most conventional applications of null-hypothesis significance tests generate test statistics associated with the probability of getting the data or evidence if the null hypothesis is true (i.e., p(E | H0)). Therefore, rejection of H0 reflects the chance of getting the data if H0 is true, not how likely it is that H0 is true given the data (i.e., p(H0 | E)). The distinctness of p(E | H0) and p(H0 | E) is obvious from Bayes' theorem above. More information (p(E) and p(H0)) than provided by the hypothesis test is needed to estimate the probability of H0 given E. Continuing with the previous example of applying Dunnett's test to sublethal effects test data, rejection of the null hypothesis of equal means for the reference and a toxicant-spiked treatment does not lead directly to the conclusion that a sublethal effect exists at the treatment concentration. It indicates only that the observations have a low probability in the long run of having occurred if the null hypothesis is true.
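To make the distinction concrete, the following sketch applies Bayes' theorem with made-up prior probabilities (all numbers below are hypothetical) to show how far p(H0 | E) can be from p(E | H0):

```python
# Hypothetical numbers illustrating that p(H0 | E) != p(E | H0).
p_E_given_H0 = 0.04   # probability of the evidence if the null is true
p_E_given_H1 = 0.60   # probability of the same evidence under the alternate
p_H0 = 0.75           # assumed prior probability that the null is true

# Total probability of the evidence, then Bayes' theorem.
p_E = p_E_given_H0 * p_H0 + p_E_given_H1 * (1 - p_H0)
p_H0_given_E = p_E_given_H0 * p_H0 / p_E

print(f"p(E | H0) = {p_E_given_H0:.2f}")
print(f"p(H0 | E) = {p_H0_given_E:.2f}")  # ~0.17 here, not 0.04
```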
Design tests allowing estimation of Positive Predictive Value
How does one estimate the probability of an alternate hypothesis being true given a significant hypothesis test? An estimate of this probability is the Positive Predictive Value (PPV):

PPV = (1 − β)R / (R − βR + α),

where R is the ratio of “true relationships” to “no relationships” estimated prior to testing [4,5]. Calculation of PPV from the above equation requires informed estimation of R and judgment about the appropriate α and β based on the seriousness of making decision errors and ES. Otherwise, the probability that the hypothesis is true given a positive test cannot be established. The related probability that the null hypothesis is true given a significant test (False Positive Result Probability [FPRP]) is the following:

FPRP = α(1 − π) / (α(1 − π) + (1 − β)π),

where π is the prior probability of association between treatment and effect (i.e., R/(R + 1)).
The previous example of a hypothetical sublethal effect data set evaluated with a one-way Dunnett's test can be used to illustrate that a test's Type I error rate of 0.05 is not a reliable indicator of PPV. Assume for purposes of illustration that most tests have five nonreference treatments and that most toxicologists design experiments so that the lowest-observed-effect concentration is one of the middle treatments. Then, R would be two or three significant treatments of a total of five treatments (i.e., 2/5 or 3/5). Also, let α and β be 0.05 and 0.2, respectively:
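A short sketch of the arithmetic, plugging the stated R, α, and β values into the PPV and FPRP expressions above:

```python
# PPV and FPRP for the hypothetical Dunnett's-test example above.
# R is the prior ratio of true to no relationships; pi = R / (R + 1).
alpha, beta = 0.05, 0.2

for R in (2 / 5, 3 / 5):
    ppv = (1 - beta) * R / (R - beta * R + alpha)
    pi = R / (R + 1)
    fprp = alpha * (1 - pi) / (alpha * (1 - pi) + (1 - beta) * pi)
    print(f"R = {R:.2f}: PPV = {ppv:.2f}, FPRP = {fprp:.2f}")

# R = 0.40: PPV ~ 0.86, FPRP ~ 0.14
# R = 0.60: PPV ~ 0.91, FPRP ~ 0.09
```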
The common assumption that p is minimally 1 – α or 0.95 that an effect exists at a treatment concentration given a statistically significant test is clearly wrong. Here, 9 in 10 would be a better estimate than the presupposed 19 in 20, or better, chance. Similarly, it is untrue that 0.05 reflects the probability that the null hypothesis is true given a significant test (i.e., FPRP). The FPRP ranges in this example from 0.09 to 0.14, opening up the question of how small the FPRP, or how large the PPV, must be to make a decision from this type of sublethal effect testing. The situation worsens for studies with higher β values, such as mesocosm and epidemiology studies. Epidemiology studies reviewed by Wacholder et al. had typical R and β values resulting in a PPV of 0.5. A statistically significant result in one of the reviewed epidemiology studies had only a 50:50 chance of correctly indicating a true effect. Equally pessimistic were Kraufvelin's comments about tests of mesocosm data.
It is recommended that the information needed to estimate PPV or FPRP be generated and included in discussions of environmental chemistry or toxicology studies.
Publish negative results
Studies showing no significant effects are judged to be of ambiguous value based on the historical pseudofalsification vantage point. The common practice of not setting or reporting Type II error rates should lead to caution when interpreting such studies, although similar caution, oddly, is not practiced when interpreting tests with significant effects.
Publication bias has two undesirable consequences. The underlying goal associated with most testing is to understand PPV or FPRP. Publication bias makes the associated estimation of R or π inaccurate. This compromises inferences from the literature, although methods for coping with this bias do exist. A more subtle effect emerges as this publication bias combines with the publication time-lag bias. Ioannidis and Trikalinos observe that the initial literature for a new research theme tends to have more reported significant studies than reported nonsignificant studies. Reports of significant findings tend to be more contrasting if emphasis is placed on Type I error rates. Researchers are attracted to contrasting reports, so these kinds of studies are more likely to catch the attention of editors and move quickly into publication. So, the preoccupation with Type I error rate and the neglect of PPV initially seeds the literature with contrasting significant studies. Ioannidis and Trikalinos define the consequent Proteus phenomenon as the appearance of highly contrasting studies during the onset of any new research theme, followed by a gradual movement toward more consistency among reports. The debates associated with the Proteus phenomenon can impede initial progress in a new area of research.
Estimate a priori, not post hoc, power
Test power (1 – β) is extremely important to define, because failure to reject H0 might reflect either insufficient power or the high probability of the observations (i.e., the common nonrejected-null-hypothesis dilemma). Consequently, well-intended journals and agencies request inappropriate post hoc estimates of observed test power to suggest the reason why H0 was not rejected. Recalling the core roles of β, α, and ES in hypothesis testing, however, power makes sense only if established a priori. Unfortunately, the requirement of a pilot study or critical literature analysis seems to foster avoidance of a priori power estimation in favor of post hoc power estimation.
Hoenig and Heisey argue forcefully against post hoc power estimation from observed test statistics. They describe the power approach paradox in which it is wrong to assume that the nonsignificant H0 for a test with high power is more likely to be true than that for a second nonsignificant H0 with lower associated power. Observed power adds no insight, because it is determined by the test's p value, which can vary widely for the two nonsignificant tests. Equally unhelpful are post hoc estimates of minimum significant difference or detectable ES. Instead, Hoenig and Heisey recommend inference from confidence intervals: “Once we have constructed a confidence interval, power calculations yield no additional insights”. This is consistent with the third change suggested above.
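For completeness, here is what an a priori power calculation looks like in practice; a minimal sketch using statsmodels, in which the standardized effect size, α, and power are hypothetical values chosen before data collection:

```python
# A priori power analysis: given alpha, desired power, and a meaningful ES
# fixed in advance, solve for the per-group sample size before the experiment.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # hypothetical standardized ES (Cohen's d), set a priori
    alpha=0.05,       # Type I error rate, justified a priori
    power=0.8,        # 1 - beta, justified a priori
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.1f}")  # ~63.8
```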
Use null, not nil, hypotheses
Emerging from Fisher's initial vantage and the “usual reject-H0-confirm-the-theory” approach [10,11], a bad habit of automatically using a hypothesis of no difference or correlation as the null hypothesis has become entrenched. Cohen refers to this as the nil hypothesis approach, which stipulates an ES of zero and misinterprets Fisher's term “null” to mean “zero” instead of “to be nullified.” As already mentioned, an ES of zero can always be rejected given enough observations, so this approach lacks merit as a reliable tool for informing decisions. It also is inconsistent with the Neyman-Pearson context of hypothesis testing, which informs decisions to act as if one or another of two (or more) hypotheses is true.
Null hypotheses should be established based on sound judgment. For example, an H0 that the decrease in reproductive output under a certain exposure regime is more than 25% might be based on the demographic insight that the species population would likely go locally extinct if output dropped by more than 25%. This kind of null hypothesis selection and testing requires more thoughtfulness about decision error consequences and about error rate and ES magnitudes, but it rewards such effort by producing much more meaningful results [10,14,16].
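A sketch of what testing such a non-nil null might look like in practice; the observed decreases and the 25% threshold below are hypothetical, and the one-sided test asks whether the decline is credibly below the demographic limit:

```python
# Non-nil null hypothesis: H0: mean reproductive decrease >= 25%.
# Rejecting H0 supports acting as if the decline stays below the
# demographically critical 25% threshold. Data are hypothetical fractions.
from scipy import stats

observed_decrease = [0.18, 0.22, 0.15, 0.20, 0.19, 0.17, 0.21, 0.16]

# One-sided test: the alternate hypothesis is "mean decrease < 0.25".
result = stats.ttest_1samp(observed_decrease, popmean=0.25, alternative="less")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```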
Avoid definitive inferences from isolated tests
A review of the above materials should suggest that a single hypothesis test rarely is as useful as a series of inferentially linked experiments and associated tests. As evidence accumulates, the PPV or FPRP changes based on the changes to R or π. The most effective inferences emerge from carefully planned research programs or themes.
ENVIRONMENTAL CHEMISTRY AND TOXICOLOGY
How does the environmental chemistry and toxicology literature stand up to the issues presented above? Ten representative journals with good impact factors were reviewed to suggest an initial answer: Aquatic Toxicology, Archives of Environmental Contamination and Toxicology, Chemosphere, Ecotoxicology, Ecotoxicology and Environmental Safety, Environmental Pollution, Environmental Science and Technology, Environmental Toxicology and Chemistry, Marine Pollution Bulletin, and The Science of The Total Environment. For each journal, a random number generator was used to pick the volume and then the article number for 10 articles published between 1996 and 2006 inclusive. Features of each article were scored (Yes, No, or Not Applicable) as summarized below and in Figure 1. Ninety-seven of the 100 surveyed papers applied quantitative methods amenable to hypothesis testing. The 95% confidence intervals shown in parentheses were produced with the Wilson method from frequencies first estimated with the SAS 9.1 software package PROC FREQ.
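Before turning to the results, note that the Wilson interval used for these percentages is easily reproduced; a minimal sketch using statsmodels, where the count of 55 papers is an illustrative stand-in for the tallies behind the reported percentages:

```python
# Wilson score interval for a proportion, as used for the survey percentages.
from statsmodels.stats.proportion import proportion_confint

# Illustrative count: about 55 of 97 quantitative papers showing a feature.
low, high = proportion_confint(count=55, nobs=97, alpha=0.05, method="wilson")
print(f"~57%: 95% Wilson CI = ({100 * low:.0f}%, {100 * high:.0f}%)")
```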
In 57% (47-67) of the 97 quantitative publications, inferences were based on hypothesis testing. Notably, many of the surveyed environmental chemistry publications presented results graphically and compared them to predictions from theories instead of relying heavily on hypothesis testing. This 57% is lower than the percentage noted in a 2005 survey of two ecology journals (Ecology and Journal of Ecology) and two conservation biology journals (Conservation Biology and Biological Conservation). Most of the publications using hypothesis tests applied the nil hypothesis. This nil hypothesis use rate was lower than that in the ecology/conservation biology survey, because the base rate of hypothesis testing also was lower, as just stated.
In 62% (49-74) of the publications employing hypothesis tests, test results were treated in a dichotomous significant-or-not-significant manner based on α = 0.05. An α set by convention before the test is incongruent with Fisher's context of significance testing. A priori selection of an α, but not a β, is inconsistent with the Neyman-Pearson vantage. As discussed already, this use of p values can lead to considerable confusion. Another 38% (26-51) of the publications used p values to categorize results within schema, such as not significant, significant, very significant, or highly significant. Such use is specifically incompatible with the Neyman-Pearson framework in which α and β are established a priori and p has no meaning outside the decision error context. In that context, either the null hypothesis or the alternate hypothesis is rejected with the specified error rates. Such use with an inferred alternate hypothesis is inconsistent with Fisher's vantage. The 2005 survey of ecology and conservation biology journals showed similar levels of use for such schema.
No publication using hypothesis testing reported calculating power a priori, and a low 4% (1-13) of publications discussed power issues in qualitative terms only. Power or some metric of minimum ES was estimated post hoc in 6% (2-16) of the publications employing hypothesis testing. The 2005 survey of ecology and conservation biology journals also found extremely low reporting of power.
Confidence intervals were used in some manner to make inferences in only 16% (12-29) of the quantitative environmental chemistry and toxicology publications. The 2005 ecology and conservation biology journal survey reported only a slightly higher level of confidence interval use. These percentages are well below those of the British Medical Journal, which after editorial policy changes increased from 4% (1977) to 62% (1994). Similarly, the American Journal of Epidemiology had a 70% confidence interval use rate in 1990. More engagement of statistical editors, as done for the British Medical Journal, might improve this situation in environmental chemistry and toxicology journals.
Relative to alternate approaches, only 3% (1-9) of the quantitative publications applied information theory-based approaches. None used Bayesian methods.
Quantitative results were analyzed in relative isolation from other experiments in 84% (72-91) of the surveyed studies. The exceptions included those compiling large toxicological data sets. No study estimated PPV or FPRP.
Generally, applications of hypothesis testing in environmental toxicology and chemistry were similar to those in other environmental sciences. The results for ecology and conservation biology discussed above led Fidler et al. to conclude that “further efforts are clearly required to move the discipline toward improved practices.” The same conclusion seems relevant to environmental chemistry and toxicology.
Unquestionably, hypothesis testing is a major tool that is misapplied in many fields. Interpretation of p values is confused (e.g., the reject-or-accept nil hypothesis routine based on 0.05). Power is ignored or given short shrift, being applied post hoc incorrectly in most of the few studies that give it attention. None of the surveyed environmental toxicology or chemistry publications set α, β, and ES a priori or attempted to estimate R or π from the literature. Therefore, the PPV or FPRP could not be estimated from results. The value of calculating PPV in environmental health risk was clearly illustrated in a study by Rizak and Hrudey, in which water-quality professionals were presented with a hypothetical detection of a pesticide. Not understanding the value of calculating PPV, most reported high certainty (80-100% chance) of the pesticide being present when, in fact, a low chance (5%) existed. To end on a positive note, however, 16% of the surveyed studies (particularly environmental chemistry studies) used confidence intervals effectively to assess results.
CONCLUSIONS ABOUT IMPROVING STATISTICAL INFERENCE
Two general recommendations suggest themselves for immediate implementation based on the materials summarized above. First, any interpretation of hypothesis testing as currently practiced should explicitly address any relevant test shortcomings and not extend inferences beyond those limits. Second, the teaching of statistics to environmental science students should shift away from a traditional emphasis on hypothesis testing to a more flexible approach embracing other valuable vantages, especially the Bayesian and information theory-based vantages.
Nine specific recommendations also are offered. The first is that confidence intervals be used instead of hypothesis tests whenever possible. Other alternate methods include Bayesian and information theory-based techniques. If hypothesis testing is done, the following eight recommendations are made: Define and justify Type I and II error rates a priori; define and justify an ES a priori; do not confuse p(E | H0) and p(H0 | E) during interpretation of results; design tests to allow estimation of PPV; publish negative results; estimate a priori, not post hoc, power; avoid nil hypotheses as much as reasonable; and avoid definitive inferences from isolated tests.
Thanks are due to the editor-in-chief for allowing me to carefully explore this issue and understand more clearly my many past errors. The author is very grateful to John Carriger, Mark Crane, Philip Dixon, Margaret Mulvey, and two anonymous reviewers for their invaluable comments on earlier versions of this manuscript.
Internet Relay Chat And Hackers
In this article I will talk about IRC, a chat protocol and the very common server software built around it. Ever since my adolescence I've been on and around IRC; it's something I'll never forget. Perhaps this is where my offensiveness comes from.
“The past beats inside me like a second heart.”
What is IRC?
IRC is an open protocol that uses TCP and, optionally, TLS. An IRC server can connect to other IRC servers to expand the IRC network. Users access IRC networks by connecting a client to a server. There are many client implementations, such as mIRC, HexChat and irssi, and server implementations, e.g. the original IRCd.
An IRCd, short for Internet Relay Chat daemon, is server software that implements the IRC protocol, enabling people to talk to each other via the Internet.
My go-to client on Windows would be mIRC; this client only runs on the Windows operating system, so it will not work on Linux.
But if you are on Linux you'd want to pick between irssi and XChat; these clients also work on Windows, and of course there are more options to choose from. It's all about your preferences in terms of simplicity and graphics.
mIRC and XChat creators
mIRC - Khaled Mardam-Bey is the developer of mIRC. He started working on mIRC in 1994 while studying for a Cognitive Science degree at the University of Westminster in London, where he first learned about the Internet.
XChat is the project of one man, Peter Železný, also known as zed. XChat was initially developed as a Unix/Linux GTK application; however, it now works on Windows too, owing to high demand.
Connecting to IRC servers
The largest IRC networks have traditionally been grouped as the "Big Four"
The "Big Four" were:
IRC reached 16 million users in 2001 and 10 million users in 2003.
Here are some of the largest IRC networks: IRCnet, EFnet, Undernet, QuakeNet, Rizon, and freenode.
Latest news about IRC
Internet Relay Chat (IRC) has lost 60% of its users, from about 1 million in 2003 to about 400,000 today. In 2003 there were 500,000 channels; now there is half that number. This is due in large part to the advent of the Web, social media platforms, and other software that is more interactive and can do a lot more than plain text.
IRC IS NOT DEAD. [THE INTERESTING PART]
So if you wonder what happens on IRC servers these days: you'll be able to find all kinds of servers, ranging from places to seek help with your project (e.g., freenode, previously known as the Open Projects Network, an IRC network used to discuss peer-directed projects) to places to simply hang out and talk to people (e.g., DALnet). Let's not forget that IRC servers have been the home of many hackers for a very long time; you can still find some old-school hackers on Undernet, though many of those guys run private servers and hide from the world.
“Scars have the strange power to remind us that our past is real.”
1337 THE HACKER COLLECTIVE
The birth of Anonymous itself was sporadic and amorphous. It took shape over several years, beginning around 2006 on the popular 4chan message board and in Internet Relay Chat channels. The first Anons were in it for the lulz: simple amusement.
Anonymous and its factions LulzSec and AntiSec drew widespread attention between 2008 and 2012 as they tore loudly through the internet, ruthlessly hacking websites, exposing corporate secrets, raiding email spools, and joining the fight of the "We are the 99%" movement. The groups appeared to be unstoppable as they attacked one target after another, more than 200 in all by the government's count. It seemed nobody was beyond their grasp.
Just as Anonymous gained mainstream notoriety, however, it seemed to disappear. Little was heard from the group again until 2010, when Anonymous defended the cause of file-sharers with DDoS attacks aimed at the Motion Picture Association of America and others. But the move that really got the group attention was Operation: Payback, a series of DDoS attacks against PayPal, Visa and MasterCard for their refusal to process donations to WikiLeaks after the site began publishing the leaks of Chelsea Manning. Since Operation Last Resort, which targeted the U.S. Sentencing Commission and MIT websites to protest the unusually harsh prosecution of internet activist Aaron Swartz, Anonymous has gone silent for the most part.
When WikiLeaks drew attention to the DDoS attacks, interest in Anonymous grew exponentially. Participation on the public channel where members and spectators communicated jumped tenfold from 700 to 7,000 people.
I personally remember the AnonOps IRC server, which I believe still exists to this day. They were using Etherpad to keep participants updated on current operations. Etherpad is a web application that allows real-time group collaboration on text documents.
The group was undone in part by Hector Xavier Monsegur, known online by the nom de hack Sabu.
Check this if you are curious to see all the Ops that were carried out by the Anonymous hacking group.
Lulz Security, abbreviated as LulzSec, was a black hat computer hacking group that claimed to be responsible for several high profile attacks, including the compromise of user accounts from Sony Pictures in 2011. The group also claimed responsibility for taking the CIA website offline.
One of the founders of LulzSec, Hector Xavier Monsegur, was identified by Backtrace Security in 2011 in a PDF publication named "Namshub".
Sabu featured prominently in the group's published IRC chats. The Economist referred to Sabu as one of LulzSec's six core members and their "most expert" hacker.
He later helped law enforcement track down other members of the organization as part of a plea deal. At least four associates of LulzSec were arrested in March 2012 as part of this investigation. British authorities had previously announced the arrests of two teenagers alleged to be LulzSec members T-flow and Topiary.
At just after midnight on 26 June 2011, LulzSec released a "50 days of lulz" statement, which they claimed to be their final release, confirming that LulzSec consisted of six members and that their website was to be shut down. This breakup of the group was unexpected.
Those who have followed the movement closely have said Sabu's participation in the arrest of Jeremy Hammond and others has had a chilling effect on Anonymous, causing members to lay low and worry if additional informants are lurking among them.
Below are chatroom logs of discussions between the hackers involved in LulzSec.
. /$$ /$$ /$$$$$$ .| $$ | $$ /$$__ $$ .| $$ /$$ /$$| $$ /$$$$$$$$| $$ \__/ /$$$$$$ /$$$$$$$ .| $$ | $$ | $$| $$|____ /$$/| $$$$$$ /$$__ $$ /$$_____/ .| $$ | $$ | $$| $$ /$$$$/ \____ $$| $$$$$$$$| $$ .| $$ | $$ | $$| $$ /$$__/ /$$ \ $$| $$_____/| $$ .| $$$$$$$$| $$$$$$/| $$ /$$$$$$$$| $$$$$$/| $$$$$$$| $$$$$$.$ .|________/ \______/ |__/|________/ \______/ \_______/ \_______/ //Laughing at your security since 2011! __ )| ________________________.------,_ _ _/o|_____/ ,____________.__;__,__,__,__,_Y...:::---===````// #anonymous |==========\ ; ; ; ; ; \__,__\__,_____ --__,-.\ OFF (( #lulzsec `----------|__,__/__,__/__/ )=))~(( '-\ THE \\ #antisec \ ==== \ \\~~\\ \ PIGS \\ `| === | ))~~\\ ```"""=,)) | === | |'---') / ==== / `=====' ´------´
- Log.txt - Tensions Inside The Group 'for the lulz'
At the time of his arrest, Monsegur was 28 years old, unemployed, and facing a sentence of up to 124 years in prison.
Monsegur served seven months in prison after his arrest but had been free since then while awaiting sentencing. At his sentencing on May 27, 2014, he was given "time served" for cooperating with the FBI and set free under one year of probation.
Anonymous reacted to Sabu's unmasking and betrayal of LulzSec on Twitter, "#Anonymous is a hydra, cut off one head and we grow two back".
If you would like to learn more about Anonymous, here are some suggested reading materials: We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency
Here is my own copy of "The Many Faces of Anonymous".
Have you got any suggestions for me? Get in touch!
Thank you for reading my article, Until next time!
Your friendly neighbourhood Hacker. |
Qiantang (now Hangzhou), Chekiang province, China
Biography

Little is known about Yang Hui other than that he wrote several outstanding mathematical texts. He was a contemporary of both Qin Jiushao and Li Zhi, which we know from the dates on which his texts appeared, showing that he lived towards the end of the Nan (Southern) Sung dynasty. However, both Qin and Li's major works appeared about fifteen years before the first work of Yang. Zhu Shijie was only born about the time Yang Hui's first texts were appearing, so his life also overlapped that of Yang.
There is a small amount of information about Yang Hui which he relates in his books. He tells us that he was taught mathematics by Liu I, who was a native of Chung-shan in Kwangtung province, which is south of Chekiang province where Yang Hui was born. Nothing at all is known of Liu I, so this information is less helpful in giving us details of Yang Hui than it might be. Again, we know the names of four of Yang's friends who were also interested in mathematics, but as these men are unknown except for Yang's references to them, this too tells us little. The best guess that historians make is that Yang was a minor Chinese official. Most Chinese scholars of the period were officials, for there were no professional mathematicians, but he could not have held an important post, since were he a major official he would appear in records of the dynasty. I [EFR] am less certain about this standard view.
I base my argument on the style and content of Yang's books, for it is clear from these that he was an experienced teacher. More than this, he is writing as a teacher trying to find the most interesting and helpful explanation. Any teacher of mathematics today can identify with what Yang is trying to do here. Of course, this in no way proves that the view of Yang as a minor official is false, indeed he could be an official with responsibility for teaching mathematics, but I suggest that it is more likely that he was an active teacher of mathematics who would have had a group of young students around him.
In 1261 Yang wrote the Xiangjie jiuzhang suanfa (Detailed analysis of the mathematical rules in the Nine Chapters and their reclassifications). He tells us that he had obtained a fine edition of the Nine Chapters on the Mathematical Art which contained notes by Jia Xian on the edition commented on by Liu Hui and later by Li Chunfeng. The notes by Jia Xian have not survived so we know of them only through the references from Yang. What Yang produced was not intended to be a further commentary on the ancient classic but instead he selected 80 of the 246 problems for his discussion. He chose these 80 since he felt that they were representatives of the different techniques which were presented in the Nine Chapters.
Yang's Detailed analysis contained twelve chapters. Nine of the twelve correspond to those of the Nine Chapters but there are three further chapters: one containing geometrical figures, one containing the fundamental methods, and one in which Yang presents a new classification of the problems. Each problem is studied by Yang for three different aspects. Firstly he explains the logic behind the problem, secondly he gives a numerical solution to the problem, and thirdly he shows how the method he has presented can be modified to solve similar problems. For example, if the problem reduced to the solution of a quadratic equation, then Yang would solve it numerically, then show how to solve a general quadratic equation numerically.
Problem 16 in Chapter 7 of the Nine Chapters is as follows:-
Now 1 cubic cun of jade weighs 7 liang, and 1 cubic cun of rock weighs 6 liang. Now there is a cube of side 3 cun consisting of a mixture of jade and rock which weighs 11 jin. Tell: what are the weights of jade and rock in the cube. [Note 1 jin = 16 liang]

If there are x cubic cun of jade and y cubic cun of rock in the cube then

x + y = 27, 7x + 6y = 176.
Although Yang has presented a problem straight from the Nine Chapters his method of solution is quite different. What Yang's method essentially reduces to is finding the determinant of the matrix of coefficients of the system of equations. Of course he gets the same answer as the earlier authors and commentators, namely that the cube contains 14 cubic cun of jade weighing 6 jin 2 liang, and 13 cubic cun of rock weighing 4 jin 14 liang.
There is other work in Yang's Detailed analysis that we should single out for a mention. He gives what today is called Pascal's triangle, up to the sixth row, saying that he learnt it from Jia Xian's treatise. Yang also gave formulae for the sum of certain series; for example, he found the sum of the squares of the natural numbers from 1 to n and showed that

1² + 2² + 3² + ⋯ + n² = (1/3)n(n + 1)(n + ½).

See M K Siu's Pyramid, pile, and sum of squares in the references below for a discussion of the geometrical ideas which lie behind Yang's approach to summing series.
A year after producing Detailed analysis Yang wrote Riyong suanfa (Mathematics for everyday use). Although the text of this has been lost, we know enough about it from quotes in other works to know that it was an elementary text. Yang says that he wrote it:-
... to assist the reader with the numerous matters of daily use and also to instruct the young in observation and practice.

Some of the quotes which allow a partial reconstruction of this work have been translated into English. In his text Yang explained:-
... the additive method of multiplication and the subtractive method of division [relative to the] ten problems and their solutions.

Over the following years Yang must have continued to produce material for mathematics texts, but he published nothing more until 1274, when the Cheng Chu Tong Bian Ben Mo, meaning Alpha and omega of variations on multiplication and division, appeared. This was a three-chapter work, each chapter having its own title. The first chapter is Fundamental changes in calculation, the second is Computational treasure for variations in multiplications and divisions, and the third, written in collaboration with Shih Chung-yung, who was one of his friends, is Fundamentals of the applications of mathematics.
In 1275 two further works by Yang appeared: the Practical mathematical rules for surveying and the Continuation of ancient mathematical methods for elucidating strange properties of numbers, both being works of two chapters. All Yang's volumes of 1274 and 1275 were assembled into what were essentially his collected works, called Yang Hui suanfa (Yang Hui's methods of computation). An English translation of the Yang Hui suanfa appears in Lam's critical study listed in the references below. The topics covered by Yang include multiplication, division, root-extraction, quadratic and simultaneous equations, series, and computations of the areas of a rectangle, a trapezium, a circle, and other figures. He also gives a wonderful account of magic squares and magic circles, about which we give more information below.
One of the more remarkable aspects of this work is the document on mathematics education, Xi Suan Gang Mu (A syllabus of mathematics), which prefaced the first chapter of the Cheng Chu Tong Bian Ben Mo. Man Keung Siu, reviewing it, writes that the syllabus:-
... is an important and unusual extant document in mathematics education in ancient China. Not only does it specify the content and the time-table of a comprehensive study program in mathematics, it also explains the rationale behind the design of such a curriculum. It emphasizes a systematic and coherent program that is based on real understanding rather than on rote learning. This program is a marked improvement on the traditional way of learning mathematics by which a student is assigned certain classical texts, to be studied one followed by the other, each for a period of one to two years!

The syllabus is a fascinating document for it shows Yang's concern that mathematics be properly taught to those meeting the subject for the first time. This was not the first time Yang had shown such concerns, for his elementary text of 1262 was also clearly designed to help beginners.
Here is a problem taken from Chapter 2 of Continuation of ancient mathematical methods for elucidating strange properties of numbers.
100 coins buy Wenzhou oranges, green oranges, and golden oranges, 100 in total. If a Wenzhou orange costs 7 coins, a green orange 3 coins, and 3 golden oranges cost 1 coin, how many oranges of the three kinds will be bought?

Yang's solution is as follows:-
From 3 times 100 coins subtract 100 coins; from 3 times the cost of a Wenzhou orange, i.e. 21, subtract 1; the remainder is 20. From 3 times the cost of a green orange, i.e. 9, subtract 1; the remainder is 8. The sum of the remainders is 28. Divide 200 by 28, we have the integer 6. These are the numbers to be found; 6 Wenzhou oranges and 6 green oranges respectively. And then (200 - 6 × 28) ÷ 8 = 4, this is the difference of the number of Wenzhou oranges and green oranges. Hence the sum of them is 16, whereas the number of gold oranges to be found is 84.

What is Yang doing? At first sight it seems to make no sense, so let us look at how we might approach such a problem. Suppose there are x Wenzhou oranges, y green oranges and z golden oranges. Then a modern solution would set up the equations

x + y + z = 100, 7x + 3y + z/3 = 100.

Multiplying the second by 3 and putting it first gives

21x + 9y + z = 300, x + y + z = 100.

Now look at Yang's explanation. He is subtracting the second equation from the first: 300 − 100 coins, 21 − 1 Wenzhou oranges, 9 − 1 green oranges. He gets

20x + 8y = 200.

Then let d, say, be the difference between the number of green oranges and the number of Wenzhou oranges, so y = x + d. Look at Yang's explanation. This is exactly what he is doing! Replace y in the above equation so that

20x + 8(x + d) = 200, that is, 28x + 8d = 200.

Hence x = 6 and d = 4, so y = 10, and 100 − (6 + 10) = 84, which is the number of golden oranges.
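As a quick check on both Yang's reasoning and the modern algebra, an exhaustive integer search over the two equations can be run; a minimal sketch, which turns up the whole family of non-negative solutions, of which Yang reports one:

```python
# Brute-force check of the 100-coins, 100-oranges problem:
# x + y + z = 100 and 7x + 3y + z/3 = 100, i.e., 21x + 9y + z = 300.
# Several non-negative integer solutions exist; Yang's answer
# (6 Wenzhou, 10 green, 84 golden) is among them.
for x in range(101):              # Wenzhou oranges
    for y in range(101 - x):      # green oranges
        z = 100 - x - y           # golden oranges
        if 21 * x + 9 * y + z == 300 and z % 3 == 0:
            print(f"Wenzhou = {x}, green = {y}, golden = {z}")
```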
If you want to try one of Yang's problems, here is another of the same type, being the first problem in Chapter 2:-
A number of pheasants and rabbits are placed together in the same cage. Thirty-five heads and ninety-four feet are counted. Find the number of pheasants and rabbits.

Finally let us note Yang's remarkable contribution to magic squares. Firstly it is important to realise that he presents them as a good way to interest people in numbers, and he does not claim any magic properties. We have used the standard term magic square, but Yang does not use the word magic, simply calling them number diagrams. He gives a magic square of order 3, two squares of order 4, two squares of order 5, two squares of order 6, two squares of order 7, two of order 8, one of order 9, and one of order 10.
Yang's 3 × 3 square
One of Yang's 4 × 4 squares
One of Yang's 5 × 5 squares
One of Yang's 6 × 6 squares
One of Yang's 7 × 7 squares is at THIS LINK.
One of Yang's 8 × 8 squares is at THIS LINK.
Yang's 9 × 9 square is at THIS LINK.
Yang's 10 × 10 square is at THIS LINK.
Again Yang does not claim any originality here, and writes as if he is presenting well known facts. This said, no record of the higher order magic squares now exist in the writings of earlier Chinese mathematicians.
As a final arithmetical treat we give Yang's simplest magic circle.
The property to note here is that there are seven intersecting circles in the diagram. Each circle has a central number and four other numbers, in the north, south, east and west positions, on its circumference. Adding the central number and the four numbers on the circumference gives 65 for each of the seven circles.
- H Peng-Yoke, Biography in Dictionary of Scientific Biography (New York 1970-1990).
See THIS LINK.
- L Y Lam, A critical study of the 'Yang Hui suan fa': a thirteenth-century Chinese mathematical treatise (Singapore, 1977).
- J-C Martzloff, A history of Chinese mathematics (Berlin-Heidelberg, 1997).
- J-C Martzloff, Histoire des mathématiques chinoises (Paris, 1987).
- Y Mikami, Mathematics in China and Japan (New York, 1961).
- J Needham, Science and Civilisation in China III (Cambridge, 1959).
- K Shen, J N Crossley and A W-C Lun, The nine chapters on the mathematical art : Companion and commentary (Beijing, 1999).
- G C Smith, S Radvansky and M Chiba, History of mathematics and related sciences : an annotated bibliography of sources held by Monash University Library (Clayton, 1992).
- G Abe, Magic squares that occur in Yang-Hui's mathematics (Japanese), Sugakushi Kenkyu 70 (1976), 11-32.
- A Hirayama, The year in which Takakazu Seki copied the 'Yang Hui Suanfa' (Japanese), Sugakushi Kenkyu 68 (1976), 1-2.
- J Needham, Yang Hui and the coming of Euclid, in J Needham, Science and civilisation in China Vol. 3 : Mathematics and the sciences of the heavens and the earth (New York, 1959), Chapter 6.
- A E Raik, The calculation of some volumes in the Old Chinese treatise 'Mathematics in nine books' (Russian), Istor.-Mat. Issled. No. 14 (1961), 467-472.
- M K Siu, Pyramid, pile, and sum of squares, Historia Math. 8 (1) (1981), 61-66.
- K Q Sun, Hui Yang's triangle and determinant (Chinese), Sichuan Shifan Daxue Xuebao Ziran Kexue Ban 17 (4) (1994), 53-57.
- H A Sun, A note on computation of the area of a quadrilateral with different sides (Chinese), J. Liaoning Norm. Univ. Nat. Sci. 25 (3) (2002), 229-232.
- Yen Tun-chieh, A study of the mathematical books written by Yang Hui in the Sung period, in Discourses on the history of mathematics of the Sung and Yuan period (Peking 1966).
- L Y Yong, Yang Hui's commentary on the 'ying nu' chapter of the Chiu chang suan shu, Historia Math. 1 (1) (1974), 47-64.
- L Y Yong, The Jih yung suan fa: an elementary arithmetic textbook of the thirteenth century, Isis 63 (218) (1972), 370-383.
- A P Yushkevich, Studies in the history of mathematics in ancient China (Russian), Voprosy Istor. Estestvoznan. i Tekhn. (3) (1982), 125-136.
- A P Youschkevitch, Nouvelles recherches sur l'histoire des mathématiques chinoises, Rev. Histoire Sci. Appl. 35 (2) (1982), 97-110.
- D M Zhou, 'A syllabus of mathematics' and Hui Yang's methodology of teaching mathematics (Chinese), J. Central China Normal Univ. Natur. Sci. 24 (3) (1990), 396-399.
Written by J J O'Connor and E F Robertson
Last Update December 2003
Sound Velocity Estimation of Seabed Sediment Based on Parametric Array Sonar.
Geoacoustic parameters of submarine sediments are the basis of shallow-sea communications, seabed resource detection, and other seabed scientific research [1-4]. In traditional seabed sediment detection, the problem of the coupling between sediment depth and sound velocity cannot be solved. In actual detection, sub-bottom profiler systems generally use an empirical sound velocity to calculate the sediment layer thickness; this inevitably limits the detection accuracy to a certain extent. To solve this problem, an effective method for acquiring the acoustic parameters of the seabed sediment in the measured area is needed.
At present, most methods of acoustic parameter inversion are based on underwater acoustic methods, such as the matched-field inversion method [7-9], the seabed reflection loss inversion method [10, 11], and the high-resolution local seabed inversion method. Matched-field inversion can estimate the geoacoustic parameters over a large area, but it can only reflect the average value of the spatially varying water and seafloor environment, and it lacks the accuracy of locally inverted parameters. The Schock inversion method, which uses the complex Biot model based on the vertical bottom reflection loss, can invert the local sound velocity at high resolution, but its calculation process is complicated. The high-resolution local seabed inversion method adopts multiple sound sources transmitting and vertical arrays receiving at different locations to measure the geoacoustic parameters of a short-range area, but the inversion efficiency is low and the approach is difficult to realise in engineering practice. The multiangle backscatter method is based on a collocated transmitter and receiver; this method uses the time-delay information of echoes in different directions to measure the sound velocity of the sediment layer, and the calculation method is simple. The flat-seafloor model for inversion of sound velocity based on multiangle seabed backscatter has been well proved in theory and experiments [16-19]; however, this model is sensitive to the inclination of the seabed and cannot be applied to inclined seabed conditions. Therefore, this model cannot currently be used to measure the sound velocity of actual seabed sediments.
In this paper, an inclined seabed model based on the multiangle seabed backscatter signal is proposed. It is assumed that there is no fluctuation in the seabed terrain within the measurement range and that the sedimentary layer is isotropic. Based on linear acoustics theory, the geometric relationship between the propagation paths of waves scattered from the upper and lower sediment surfaces and received at different angles is analysed. An equation for the sound velocity of the sedimentary layer is constructed and finally solved using accurate time-delay information extracted from the echo signals.
2. Descriptions of the Problem
2.1. Inclined Sediment Model. The multiangle backscattering geoacoustic parameter estimation method proposed in this article calculates seabed acoustic parameters using a phased parametric array that receives echo signals from the upper and lower interfaces of the sediment layer at different angles. The geometric schematic of the acoustic parameter estimation for the inclined submarine sedimentary layer using the phased parametric array is shown in Figure 1 (the sedimentary layer is assumed to be a single layer). The sedimentary layer is isotropic within the detection area and its boundary is straight; the oblique angle of the inclined seabed is $\theta_0$, and the angle between the bottom of the sedimentary layer and the seabed is $\theta_1$. The seabed depth $h$ at the measuring point can be measured directly with the primary-frequency sound wave radiated by the parametric array, and the seawater sound velocity $c_0$ can also be obtained directly with measurement equipment. The thickness and sound velocity of the sediment are $H_i$ and $c_s$, respectively. The parametric array can transmit primary-frequency sound beams and the corresponding difference-frequency sound beams in different directions by phased-control technology.
2.2. Multiangle Sound Propagation Model. Under ray acoustics, when sound waves propagate across different media there is a refraction effect. When the phased parametric array radiates a detection signal at angle $\varphi_i$, the distance $h_i$ from the seabed footprint to the surface of the parametric array (as shown in Figure 1) is
$h_i = \frac{1}{2}\, t_i c_0 \cos(\alpha_i + \theta_0)$, (1)
and the distance $d_i$ from the footprint on the lower surface of the sediment to the seabed footprint is
$d_i = \frac{1}{2}(T_i - t_i)\, c_s \sin\beta_i$, (2)
where $\alpha_i = \pi/2 - \theta_0 - \varphi_i$, and $i = 1, 2, 3, \dots, N$ indexes the sound beams radiated by the parametric array at the phased-control angles $\varphi_i$, $N$ being the total number of beams. $t_i$ and $T_i$ are the time delays of the echo signals scattered from the upper and lower surfaces of the sediment, respectively. $\alpha_i$ and $\beta_i$ are the glancing angle and the refraction angle of the sound beam in the sediment layer, respectively, and they are subject to Snell's law as follows:
$\cos\alpha_i / \cos\beta_i = c_0 / c_s$. (3)
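As a quick numerical check of equation (3), the refraction angle can be computed directly. The sketch below is ours, not part of the paper; the function name and the sound speeds and glancing angle used in the example are illustrative values only.

```python
import numpy as np

def refraction_angle(alpha_i, c0, cs):
    """Refraction angle beta_i (radians) in the sediment, given the
    glancing angle alpha_i (radians) at the water/sediment interface,
    via Snell's law in the form cos(alpha_i)/cos(beta_i) = c0/cs."""
    cos_beta = np.cos(alpha_i) * cs / c0
    if cos_beta > 1.0:
        # For a faster lower medium, small glancing angles give total reflection.
        raise ValueError("total reflection: no refracted ray at this angle")
    return np.arccos(cos_beta)

# Illustrative values: water 1500 m/s, sediment 2000 m/s, glancing angle 70 deg.
beta = refraction_angle(np.radians(70.0), 1500.0, 2000.0)
print(np.degrees(beta))  # ~62.9 degrees: the ray bends toward the interface
```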
When measuring the oblique angle of the seabed, sound beams at different phased-control angles should be used. According to the law of sines, there is the geometric relation
$\frac{r_i}{\sin\alpha_j} = \frac{r_j}{\sin\alpha_i}$, (4)
where $r_i = t_i c_0 / 2$ is the sound path from the parametric array to the upper surface of the sediment, $i, j = 1, 2, 3, \dots$ and $i \neq j$.
The oblique angle of the upper surface of the sediment (seabed) is
$\tan(\theta_0 + \varphi_j) = \cot(\varphi_i - \varphi_j) - \frac{T_j - t_j}{T_i - t_i} \cdot \frac{1}{\sin(\varphi_i - \varphi_j)}$. (5)
According to the geometric relationship, the inclined angle $\theta_1$ between the upper surface and the lower surface of the sediment can be calculated by
$\tan\theta_1 = \frac{d_0 - d_i}{d_i \cot\beta_i + h_i\,(\cos\alpha_i / \cos\theta_0)}$, (6)
where $d_0$ is the distance from the position of the lower surface to the upper surface when the sound beam enters the seabed vertically.
Introducing equations (1) and (2) into equation (6), we can obtain
[mathematical expression not reproducible]. (7)
3. Sound Velocity Solution
It can be seen from equations (1)-(2) and (7) that the accurate thickness of the sediment depends on the sound velocity within the sediment layer. Therefore, once the actual sound velocity in the sediment is obtained, the stratum structure of the sediment can be detected accurately. By rearranging equation (7), the binary equations for calculating the sediment sound velocity $c_s$ and the angle $\theta_1$ from beams in different directions can be obtained as
[mathematical expression not reproducible]. (8)
In this paper, the least squares problem of equation (8) is solved using the Gauss-Newton iteration algorithm. Introducing equation (3) into equation (8), equation (8) can be written as
[mathematical expression not reproducible], (9)
where [mathematical expression not reproducible].
The first-order Taylor expansion of the binary function $f_j(x_1, x_2)$ at $(x_1^0, x_2^0)$ is
[mathematical expression not reproducible], (10)
[mathematical expression not reproducible]. (11)
Expressing the function arguments in vector form $x^0 = (x_1^0, x_2^0)$, equation (10) can be transformed into the following vector form:
$f(x^0) + J(x^0)(x - x^0) = 0$, (12)
[mathematical expression not reproducible]. (13)
When $n > 2$, equation (12) is an overdetermined system, and its least squares solution is
$x^1 = x^0 - J^{-1}(x^0) f(x^0)$, (14)
where $J^{-1}$ is the generalized inverse matrix and $x^0$ is the initial value of the iteration; the solution obtained after multiple iterations is
$x^{k+1} = x^k - J^{-1}(x^k) f(x^k)$. (15)
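The iteration in equations (14)-(15) can be sketched in a few lines. The minimal implementation below is our illustration, not the paper's code: the function names, tolerances, and toy test system are assumptions, and a least-squares solve is used in place of explicitly forming the generalized inverse $J^{-1}$ (numerically the preferable choice).

```python
import numpy as np

def gauss_newton(f, jac, x0, max_iter=20, tol=1e-10):
    """Minimal Gauss-Newton iteration for the least-squares solution of
    f(x) = 0, where f: R^m -> R^n is overdetermined (n > m) and jac(x)
    returns the n-by-m Jacobian. Implements x^{k+1} = x^k - J^+ f(x^k),
    J^+ being the Moore-Penrose generalized inverse."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = f(x)
        J = jac(x)
        # Solve J dx = r in the least-squares sense instead of forming J^+.
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x - dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy usage on an overdetermined linear system A x = b: the least-squares
# solution is recovered in a single step, as expected for a linear residual.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.1])
x = gauss_newton(lambda x: A @ x - b, lambda x: A, x0=[0.0, 0.0])
print(x)
```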
4. Numerical Calculation and Discussion
The depth of the seabed and the thickness of the sedimentary layer below the parametric array are $h = 3$ m and $H = 4$ m, and the sound velocities of the seawater and the sedimentary layer are $c_0 = 1500$ m/s and $c_s = 2000$ m/s, respectively. The parametric array radiates a series of phase-controlled sound beams from $-20°$ to $20°$ in $0.5°$ steps to scan the seabed. Using the inclined seabed model, the direction of each sound beam and the echo-delay information can be obtained directly. The inclination angle $\theta_0$ of the seabed is estimated from the measurement in equation (5). The echo delays $t_i$ and $T_i$ of the vertical detection beam on the upper and lower surfaces of the sediment can then be determined from the series of detection beams according to the current seabed tilt angle. Equation (15) is used to solve for the sound velocity of the sediment layer. The initial iteration condition is $x^0 = (1700/1500, 0)$.
4.1. Model Performance Analysis. To analyse the accuracy of sound-speed inversion under the inclined seabed model and the flat seabed model, the inversion results calculated from multiangle echo signals under different conditions were compared. As shown in Figure 2(a), the sound velocity is prone to large drift when using the flat seabed model; in particular, the results change dramatically with the seafloor inclination angle and the incident angle of the sound beam. As a comparison, under the same conditions, the sound velocity inversion results obtained with the method proposed in this paper are given in Figure 2(b). The results show that, compared with the traditional flat seabed model, the inclined seabed model adapts well to more complex actual seabed conditions, such as varying sediment thickness and different seabed inclination angles.
The sound velocity in the sediment and the angle between the upper and lower surfaces of the sediment are calculated by the Gauss-Newton iterative method under different conditions. To analyse the convergence rate and accuracy of equation (15), the iteration histories of the sound speed and of the angle are shown in Figure 3. The calculation results show that after five iterations the results are close to the true values, which verifies the effectiveness of the method.
4.2. Sediment Thickness Correction. Owing to the refraction of sound rays, the propagation path of sound in the sediment does not coincide with the depth of the sediment. As shown in Figure 1, when the sound beam arrives at the upper sediment interface (the sea bottom), the propagation path and the thickness of the sediment are related through the law of sines. The thickness of the deposited layer at the seabed footprint is
$H_i = \frac{1}{2} \cos(\theta_0 - \theta_1)\, \sin(\beta_i + \theta_1)\, (T_i - t_i)\, c_s$. (16)
Figure 4 shows the sub-bottom profiles detected by the phased parametric array radiating sound beams from $-20°$ to $20°$ in $0.5°$ steps to scan the seabed. The correction results of equations (1) and (16) are shown in Figure 4(b); the correction removes the progressive distortion of the sediment-layer contour with increasing angle and improves the accuracy of the sub-bottom profile.
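Equation (16) is a one-line computation once the angles are known. The sketch below is ours, with invented input values chosen to roughly match the simulation conditions above (a near-vertical beam and gentle slopes), and shows the corrected thickness at the seabed footprint.

```python
import numpy as np

def sediment_thickness(T_i, t_i, c_s, beta_i, theta0, theta1):
    """Corrected sediment thickness at the seabed footprint, eq. (16):
    H_i = 1/2 * cos(theta0 - theta1) * sin(beta_i + theta1) * (T_i - t_i) * c_s.
    Angles in radians, times in seconds, speeds in m/s."""
    return 0.5 * np.cos(theta0 - theta1) * np.sin(beta_i + theta1) * (T_i - t_i) * c_s

# Illustrative numbers: 4 ms of two-way travel inside the sediment at
# c_s = 2000 m/s, refraction angle 85 deg, slopes of 10 and 5 deg.
print(sediment_thickness(7e-3, 3e-3, 2000.0,
                         np.radians(85.0), np.radians(10.0), np.radians(5.0)))
# ~3.97 m, close to the simulated thickness H = 4 m
```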
5. Conclusions
An inclined seabed model based on multiangle seabed backscattering signals detected by a phased parametric array is used for sediment sound velocity inversion. Compared with the existing flat seabed model, this model greatly improves the feasibility of inverting the acoustic parameters of the actual seabed. This work provides a basis for further real-time inversion of submarine geoacoustic parameters based on multiangle backscattering signal processing. In actual shallow stratum profile detection, the accuracy of the sediment profile detected by the phased parametric array sonar system can be corrected using the beam angle and the sound velocity inverted in real time. Thereby, the model and the method proposed in this article can greatly improve the detection efficiency.
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest.
This work was supported by the NSFC-Zhejiang Joint Fund for the Integration of Industrialization and Informatization (No. U1809212), NSFC-Shandong Joint Fund for Marine Science Research Centers (No. U1906218), the National Natural Science Foundation of China (Nos. 41306182, 41327004, 41606115, and 61401112).
Y.-Y. Zhang, Y. Zhang, G.-L. Hou, and J. C. Sun, "Acoustic intensity flux in low frequency acoustic field of shallow water and its application research," in Proceedings of the 2012 2nd International Conference on Computer Science and Network Technology, pp. 1914-1917, Changchun, China, December 2012.
Y. Zhou and F. Tong, "Research and development of a highly reconfigurable OFDM MODEM for shallow water acoustic communication," IEEE Access, vol. 7, pp. 123569-123582, 2019.
S. Yoshizawa, H. Tanimoto, and T. Saito, "Experimental results of OFDM rake reception for shallow water acoustic communication," in Proceedings of the 2016 Techno-Ocean (Techno-Ocean), pp. 185-188, Kobe, Japan, October 2016.
Y. Cheng, A. Zhao, J. Hui, T. An, and B. Zhou, "Parametric underwater transmission based on pattern time delay shift coding system," Mathematical Problems in Engineering, vol. 2018, Article ID 8249245, 7 pages, 2018.
G. Theuillon, Y. Stephan, and A. Pacault, "High-resolution geoacoustic characterization of the seafloor using a subbottom profiler in the gulf of lion," IEEE Journal of Oceanic Engineering, vol. 33, no. 3, pp. 240-254, 2008.
M. Gutowski, J. Malgorn, and M. Vardy, "3D sub-bottom profiling--high resolution 3D imaging of shallow subsurface structures and buried objects," in Proceedings of the Oceans 2015--Genova, pp. 1-7, Genoa, Italy, May 2015.
D. Wang, L. Zhang, C. Bao, S. Ma, and Y. Wang, "A comparative study for shallow water match-field inversion using surrogate models," in Proceedings of the Oceans 2019--Marseille, pp. 1-6, Marseille, France, June 2019.
S. E. Dosso and M. J. Wilmut, "Data uncertainty estimation in matched-field geoacoustic inversion," IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 470-479, 2006.
X. Zhu, Z. Wang, and H. Ge, "Matched field processing for a short vertical array in shallow water using the geoacoustic parameters inversed from short range source data," in Proceedings of the 2016 IEEE/OES China Ocean Acoustics (COA), pp. 1-5, Harbin, China, January 2016.
L. Y. S. Chiu, A. Y. Y. Chang, H. Chen, C. Wang, and J. Y. Lou, "Error analysis on normal incidence reflectivity measurement and geoacoustic inversion of ocean surficial sediment," in Proceedings of the 2019 IEEE Underwater Technology (UT), pp. 1-10, Kaohsiung, Taiwan, April 2019.
S. G. Schock, "Remote estimates of physical and acoustic sediment properties in the South China Sea using chirp sonar data and the biot model," IEEE Journal of Oceanic Engineering, vol. 29, no. 4, pp. 1218-1230, 2004.
Y. Kunde, M. Yuanliang, S. Chao, J. H. Miller, and G. R. Potty, "Multistep matched-field inversion for broad-band data from ASIAEX2001," IEEE Journal of Oceanic Engineering, vol. 29, no. 4, pp. 964-972, 2004.
S. G. Schock, "A method for estimating the physical and acoustic properties of the sea bed using chirp sonar data," IEEE Journal of Oceanic Engineering, vol. 29, no. 4, pp. 1200-1217, 2004.
M. R. Fallat, P. L. Nielsen, S. E. Dosso, and M. Siderius, "Geoacoustic characterization of a range-dependent ocean environment using towed array data," IEEE Journal of Oceanic Engineering, vol. 30, no. 1, pp. 198-206, 2005.
T. Zhou, H. Li, J. Zhu, and Y. Wei, "A geoacoustic estimation scheme based on bottom backscatter signals from multiple angles," Acta Physica Sinica, vol. 8, no. 8, pp. 208-214, 2014.
H. Li, J. Ma, J. Zhu, and B. Chen, "Numerical and experimental studies on inclined incidence parametric sound propagation," Shock and Vibration, vol. 2019, Article ID 2984191, 10 pages, 2019.
L. Wan, X. Kong, and F. Xia, "Joint range-Doppler-angle estimation for intelligent tracking of moving aerial targets," IEEE Internet of Things Journal, vol. 5, no. 3, pp. 1625-1636, 2018.
F. Wen, Z. Zhang, K. Wang, G. Sheng, and G. Zhang, "Angle estimation and mutual coupling self-calibration for ULA-based bistatic MIMO radar," Signal Processing, vol. 144, pp. 61-67, 2018.
X. Wang, L. Wang, X. Li, and G. Bi, "Nuclear norm minimization framework for DOA estimation in MIMO radar," Signal Processing, vol. 135, pp. 147-152, 2017.
L. Zhang and T. Zhang, "A robust calibration method for the underwater transponder position based on Gauss-Newton iteration algorithm," in Proceedings of the 2019 7th International Conference on Control, Mechatronics and Automation (ICCMA), pp. 448-453, Delft, Netherlands, November 2019.
Jingxin Ma,(1,2,3) Haisen Li,(1,2,3) Jianjun Zhu,(1,2,3) and Baowei Chen(1,2,3)
(1) Acoustic Science and Technology Laboratory, Harbin Engineering University, Harbin 150001, China
(2) Key Laboratory of Marine Information Acquisition and Security (Harbin Engineering University), Ministry of Industry and Information Technology, Harbin 150001, China
(3) College of Underwater Acoustic Engineering, Harbin Engineering University, Harbin 150001, China
Correspondence should be addressed to Jianjun Zhu; firstname.lastname@example.org
Received 5 July 2020; Revised 24 July 2020; Accepted 27 July 2020; Published 14 August 2020
Academic Editor: Ivan Giorgio
Caption: Figure 1: Geometric diagram of the sound propagation path under inclined seabed condition.
Caption: Figure 2: Sediment velocity estimation under different seabed conditions: (a) flat seabed model and (b) inclined seabed model.
Caption: Figure 3: Convergence rate analysis of Gauss-Newton iteration method under different seabed conditions (simulation conditions: [[theta].sub.0] = 10[degrees]): (a) iterative convergence of sound speed and (b) iterative convergence of [[theta].sub.1].
Caption: Figure 4: Phased control sound beams detection of sediment profiles (simulation conditions [[theta].sub.0] = 10[degrees], [[theta].sub.1] = 5[degrees], [c.sub.s] = 2000, h = 3 m, and H = 4 m): (a) before correction and (b) after correction.
You must have noticed that while shopping online, a set of items related to the item you are viewing or ordering is shown with tags like "Frequently bought together". So how do they figure out which items are bought together? Association rules attempt to answer these kinds of questions.
In basic terms, association rules express relations between items. They are statements that help discover relationships between data in a database. An association rule is an implication of the form A → B. Here 'A' is the premise, a condition that must be true for 'B' to hold, and 'B' is the conclusion that follows when 'A' is true. 'A' is also called the antecedent and 'B' the consequent: an antecedent is an element found in the data, whereas a consequent is found in combination with the antecedent. The rule A → B can be read as: if A happens, B happens. This is a very general interpretation; the precise interpretation depends on the domain.
Let us try to understand this concept better with the help of a few examples. Suppose you are the owner of a stationery shop. Now you get a rule: Book → Pen. This can be interpreted as: if a customer buys a book, he will also buy a pen. Let us look at another simple example. Suppose you are analysing an online streaming website of TV series and see the rule Friends → How I Met Your Mother. It means that people who have watched 'Friends' have also watched 'How I Met Your Mother'.
Now let us look at how association rules help. Their interpretations can help improve or build a system. Taking the stationery shop example again: we know that a customer is likely to buy a pen if he buys a book, so we place books and pens together so that sales improve. Similarly, taking the example of the streaming website, if we list 'Friends' and 'How I Met Your Mother' together, viewership will increase.
But when you use association rules to take an action, you have to be careful about the situation. For example, suppose you are the owner of a supermarket and you find the rule: Beer → Diapers. This is a very unexpected association and, as the owner, you should not simply move these products together on the shelves.
For applying association rules, we need data in the form of transactions. In this context, a transaction means a logical group of items; these transactions are unrelated to database transactions. A transaction can be a group of grocery items, a list of movies, etc.
Association rules in Data Science
In data mining, the interpretation of association rules depends on what you are mining. Let us take an example to understand how association rules help in data mining, using the typical market basket analysis scenario. In this example, a transaction is the contents of one basket; each transaction therefore gives us the contents of one customer's basket. The table below shows the data.
We have data for four customers and the contents of their market baskets. Looking at the data, we notice that transactions 1, 2 and 4 contain bread and milk. This can be converted into our first association rule: Bread → Milk. Thinking in a similar way, we can find that milk and butter are present in transactions 1, 2 and 3. This leads to another association rule, Milk → Butter.
However, there is a drawback: we have not figured out which rule is better, and as stated they are impossible to compare. To overcome this problem, we can use several measures that show the strength of a rule. They are also called 'interestingness measures', because the strength of a rule corresponds to how interesting it is.
For association rules, there are two classical measures: support and confidence. Support measures how often the rule applies: it is the percentage of all transactions that contain the items on both the left and the right side of the rule. Confidence is the percentage of transactions containing the left-hand side that also contain the right-hand side.
Let us look at an example to have a better understanding of support and confidence. We will consider the classical market basket case again.
Let us try to analyse the data in the table above. Bread and mayo are both in the baskets of transactions 1, 2 and 6, and the total number of transactions is six. So the support for the rules Bread → Mayo and Mayo → Bread is 3/6 = 50%: the number of transactions in which both items are present (3) divided by the total number of transactions (6). For calculating confidence, we have to look at the number of transactions in which bread appears and, among those, how many also contain mayo.
Now consider only the Bread → Mayo rule. Bread is present in baskets 1, 2, 3, 4 and 6, i.e., five baskets. Out of these, mayo is present in baskets 1, 2 and 6. So the confidence of the rule Bread → Mayo is 3/5 = 60%.
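These two calculations are easy to script. The following sketch recreates the bread/mayo numbers above; the basket contents beyond bread and mayo are invented to fill out six transactions.

```python
def support(transactions, lhs, rhs):
    """Fraction of all transactions containing every item in lhs and rhs."""
    both = lhs | rhs
    return sum(1 for t in transactions if both <= t) / len(transactions)

def confidence(transactions, lhs, rhs):
    """Among transactions containing lhs, the fraction also containing rhs."""
    lhs_hits = [t for t in transactions if lhs <= t]
    if not lhs_hits:
        return 0.0
    return sum(1 for t in lhs_hits if rhs <= t) / len(lhs_hits)

# Six illustrative baskets: bread and mayo together in baskets 1, 2 and 6,
# bread alone in baskets 3 and 4, matching the worked example above.
baskets = [
    {"bread", "mayo", "milk"},   # 1
    {"bread", "mayo"},           # 2
    {"bread", "eggs"},           # 3
    {"bread", "butter"},         # 4
    {"milk", "eggs"},            # 5
    {"bread", "mayo", "butter"}, # 6
]
print(support(baskets, {"bread"}, {"mayo"}))     # 3/6 = 0.5
print(confidence(baskets, {"bread"}, {"mayo"}))  # 3/5 = 0.6
```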
So, we can now calculate the support and confidence of association rules. These can be used to filter out uninteresting rules by setting thresholds, for example requiring a minimum support of 50% and a minimum confidence of 75% for a rule to be considered interesting. The values of these thresholds determine how many rules come out as interesting: if the values are too low, too many rules may be classified as interesting, and vice versa, so appropriate values should be chosen.
In conclusion, there are various control parameters in data mining with association rules that can be tuned to get the desired output depending on the case. The most important question is what you are mining. Apart from support and confidence, many other interestingness measures exist for association rule mining that may work better in specific cases. Data mining using association rules has applications in web usage mining, market basket analysis, bioinformatics, healthcare, continuous-flow processes, etc., and is therefore an interesting emerging technique that can help improve efficiency.
Find equivalent forms for positive rational numbers. Divide numerator by denominator of a fraction to find a decimal.
Enduring Understanding (Big Ideas):
Equivalency Rational numbers
Convert positive rational numbers to fraction, decimal or percent form.
Convert, equivalent number forms, repeating decimal, terminating decimal.
Ways to Gain/Maintain Attention (Primacy):
Technology, Journaling (Foldable), Sketching, Cooperative group discussion, game.
Starter: Sketch each of the following and tell where you might use that rational number form.
Lesson Segment 1: How can I represent equivalent forms for decimals, fractions and percents?
Put the Representing Equivalent Rational Numbers, and Fractions, Decimals, and Percents With Candy on transparencies, so you can discuss with the class.
As a class discuss and work to complete Representing Equivalent Rational Numbers
Apply: Have students work to complete Fractions, Decimals, and Percents With Candy one question at a time. Use a Board Talk protocol.
Board Talk Protocol
Students discuss a problem with team members or a partner without writing anything on their papers.
Two or three students are randomly selected to come to the board to individually sketch and show reasoning for the first problem. The students work in separate spaces on the board, so the seated class members will be able to see and compare separate responses.
While the three students are working at the board, the remaining students work at their seats to complete the first item on their individual papers. The teacher selects a student at the board to explain to the class what they have done. The class is told they must each write one GOOD QUESTION about the explanation the student at the board is giving. A good question starts with "how", "why", "what if", or "can you clarify". Write these GOOD QUESTION starters on the board. Students must write their good question on their assignment paper as the student is explaining.
After the explaining student finishes, the teacher selects one or two from the class to ask their GOOD QUESTION to the explaining student.
The teacher may select a second or third student at the board to then explain their approach, especially if they have a different response. The seated students again write a GOOD QUESTION for that explaining student. Or, the teacher may ask the class members to look at all responses on the board and prepare to describe how they are similar or different.
We know from our last lesson that there are times when one form of a rational number is better than another. For example, we wouldn't want to use the percent form for ½ in a recipe, and we wouldn't want to use the fraction form for $1.35 at the store.
Lesson Segment 2: How can a rational number be converted to a different form? How does the fraction a/b relate to a divided by b?
Q. Why would we want to be able to convert from one rational number form to another?
There are many procedures for converting rational numbers. One of these is to use the decimal form as the "Middle Man": percents and fractions are first converted to decimal form, and the decimal can then be converted to the other form.
Sketch this graphic on the board. The idea here is that fractions can be converted to decimals by dividing denominator into numerator, and percents can be converted to a decimal by moving the decimal two places to the left.
Ask students if they have ever had to go through a middle man. For example, when I was younger, I always went to my mother to ask her to get something I wanted from my father because she was easier to work with.
Fraction to Decimal: Demonstrate using the TI-73 to write a fraction as a decimal by dividing the denominator into the numerator. Use common fractions such as ½, ⅓, ⅔, ⅛, ⅝, ¼, ¾, and the fifths. Discuss the repeating decimals for ⅓ and for ⅔ pointing out that the calculator rounds the last digit for ⅔.
Percent to Decimal: Show students how to use the calculator to divide any number written in percent form by 100 to get a decimal.
Once the number is written in decimal form, we can use the Decimal Conversion Procedures:
Decimal to fraction: Write the decimal as a fraction using 10ths, 100ths, 1000ths, etc. as the denominator, depending on the last place value of the number.
Decimal to percent: Move decimal to the right two places.
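For reference, the "Middle Man" procedures above translate directly into a few lines of code. This sketch is in Python rather than on the TI-73 and is only an illustration of the same steps.

```python
from fractions import Fraction

def fraction_to_decimal(num, den):
    """Divide the numerator by the denominator, as on the calculator."""
    return num / den

def percent_to_decimal(p):
    """Move the decimal point two places left (divide by 100)."""
    return p / 100

def decimal_to_percent(d):
    """Move the decimal point two places right (multiply by 100)."""
    return d * 100

def decimal_to_fraction(d):
    """Write the decimal over 10ths/100ths/1000ths and reduce."""
    return Fraction(str(d))  # Fraction('0.35') -> 7/20

print(fraction_to_decimal(3, 4))   # 0.75
print(decimal_to_percent(0.75))    # 75.0
print(percent_to_decimal(35))      # 0.35
print(decimal_to_fraction(0.35))   # 7/20
```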
Help students make a Three-Flap Foldable for their journal that looks like this. Clip on the dotted lines up to the fold line.
Write the decimal conversion procedure under the center flap. Write the procedures for converting fractions and percents to decimal form under the two designated flaps.
Sing the Equivalent Forms of Rational Numbers Song with them (attached)
Another way of finding equivalent rational numbers is to use the conversion feature on the TI-73. Show the students how to use the key on the TI-73.
Fraction to Decimal and Decimal to Fraction: Type number then push and .
Percent to Fraction: Type the number then push
Fraction to percent: Type in the fraction then push 100.
Percent to Decimal: Type the number then push
Having more than one strategy for finding equivalent rational numbers will be helpful.
Lesson Segment 3: Practice Game:
Play Converting Rational Numbers Concentration (attached). Put the game on a transparency. Cover the squares with small post-its. Divide the class into two teams and have them guess to find matching pairs: two equivalent numbers in different forms.
Assign any text practice as needed.
Observation, student performance tasks.
This lesson plan was created by Linda Bolin. |
What Do The Error Bars On The Points Represent
A few weeks back I posted a short piece on the merits and pitfalls of including your uncertainty, or error, in any scientific figure. One common thread amongst the responses was a general uncertainty about uncertainty, and this post hopes to answer some of those questions. As a running example: if the 95% confidence interval in experiment B includes zero, the P value must be greater than 0.05, and you can conclude that the difference is not statistically significant.
What Do Error Bars Show
When SE bars just touch, P = 0.17 (Fig. 1a). As for choosing between the two measures, I have a personal preference for confidence intervals, as they seem the most flexible and require fewer assumptions than the standard error. Error bars can also be used to draw attention to very large or small population spreads, and the often-quoted rule that "if the bars do not overlap, the difference between the values is statistically significant" is incorrect. One choice is whether to include a trendline or to perform a true curve fit. Another useful design is to take the difference for each culture (or animal) in the group and then graph the single mean of those differences, with error bars that are the SE or 95% CI calculated from those differences.
As such, the closest thing I've got to the true distribution of all the data is the sample that I've already got. The first step in avoiding misinterpretation is to be clear about which measure of uncertainty is being represented by the error bar. It is standard practice to report error when preparing figures that represent uncertain quantities, but the safest thing is to state exactly what you are reporting.
By chance, two of the intervals (red) do not capture the mean. If n = 3, you need to multiply the SE bars by 4 to approximate a 95% CI. Rule 5: 95% CIs capture μ on 95% of occasions, so you can be 95% confident your interval contains it. SD is calculated by the formula SD = √(Σ(X − M)²/(n − 1)), where X refers to the individual data points, M is the mean, Σ (sigma) means add to find the sum, and n is the number of data points. On trendlines: when one sketches a line that connects individual data points, or fits a curve to data by visual inspection alone, one has produced a trendline.
How To Calculate Error Bars
The bars on the left of each column show the range. Descriptive error bars can also be used to see whether a single result fits within the normal range, and all the figures can be reproduced using the spreadsheet available in Supplementary Table 1, with which you can explore the relationship between error bar size, gap and P value.
On the software side, MATLAB's errorbar function lets you omit the lower part of the error bar at any data point by setting neg to an empty array. Let's take, for example, the impact energy absorbed by a metal at various temperatures, and look at two contrasting examples. When random error is unpredictable enough and/or large enough in magnitude to obscure the relationship, it may be appropriate to carry out replicate sampling and represent the error in the figure. The trendline in figure 3 was positioned so that the same number of data points fall above the line as below it. Although it would be possible to assay the plate and determine the means and errors of the replicate wells, the errors would reflect the accuracy of pipetting, not the reproducibility of the experiment.
Because the standard deviation is divided by the square root of N, the standard error grows smaller as the number of measurements (N) grows larger. Note that the confidence interval for the difference between two means is computed very differently for the two tests. Here, SE bars are shown on two separate means, for control results C and experimental results E, when n is 3 (left) or n is 10 or more (right).
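To make the three quantities concrete, here is a short sketch computing SD, SE and a 95% CI half-width for one sample; the data values are invented.

```python
import numpy as np
from scipy import stats

data = np.array([4.5, 4.8, 5.1, 4.9, 5.3, 4.7, 5.0, 4.6])  # invented sample
n = len(data)
mean = data.mean()
sd = data.std(ddof=1)          # sample SD: sqrt(sum((X - M)^2) / (n - 1))
se = sd / np.sqrt(n)           # standard error of the mean
# 95% CI half-width using Student's t, appropriate for small n
ci95 = stats.t.ppf(0.975, df=n - 1) * se

print(f"mean = {mean:.3f}, SD = {sd:.3f}, SE = {se:.3f}, 95% CI = +/-{ci95:.3f}")
```

Whichever of the three you plot (for example via matplotlib's errorbar with yerr set to se or ci95), the figure caption should say which one it is.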
When you view data in a publication or presentation, you may be tempted to draw conclusions about the statistical significance of differences between group means by looking at whether the error bars overlap. The principle might be easier to see when described visually. There are two common conventions: one is the standard deviation of a single measurement (often just called the standard deviation), and the other is the standard deviation of the mean, usually called the standard error. Since we are representing means in our graph, the standard error is the appropriate measurement to use for the error bars. For such comparisons, the results of a statistical analysis such as Student's t test or an analysis of variance might be illustrated in the figure itself or placed in the caption. You may also encounter the terms standard error or standard error of the mean, both of which usually denote the standard deviation of the mean.
You can also vary the lengths of the error bars, e.g. x = 1:10:100; y = [20 30 45 40 60 65 80 75 95 90]; err = [5 8 2 9 3 3 8 3 ...]. The leftmost error bars show SD, the same in each case. When a best-fit line is determined by the method of least squares, error bars may be shown (as in figure 4) but they are not involved in the analysis. Specify the values in data units and plot y versus x. Examples are based on sample means of 0 and 1 (n = 10). Means and SE bars are shown for an experiment where the number of cells in three independent clonal experimental cell cultures (E) and three independent clonal control cell cultures (C) was measured. However, the converse is not true: you may or may not have statistical significance when the 95% confidence intervals overlap. Looking at the equation for the standard error, the idea is to demonstrate the extent to which random error influenced the reliability of the data. SE bars separated by about 1 s.e.m. indicate a borderline-significant difference, whereas 95% CI bars are more generous and can overlap by as much as 50% and still indicate a significant difference.
For the n = 3 case, SE = 12.0/√3 = 6.93, and this is the length of each arm of the SE bars shown. Figure 4: Inferential error bars.
- Open Access
Fixed point property and approximation of a class of nonexpansive mappings
Fixed Point Theory and Applications volume 2014, Article number: 81 (2014)
We introduce the concept of ψ-firmly nonexpansive mapping, which includes a firmly nonexpansive mapping as a special case in a uniformly convex Banach space. It is shown that every bounded closed convex subset of a reflexive Banach space has the fixed point property for ψ-firmly nonexpansive mappings, an important subclass of nonexpansive mappings. Furthermore, Picard iteration of this class of mappings weakly converges to a fixed point.
MSC:47H06, 47J05, 47J25, 47H10, 47H17.
Throughout this paper, a Banach space E will be over the real scalar field. We denote its norm by ‖·‖ and its dual space by E*. Let F(T) = {x : Tx = x} denote the set of all fixed points of a mapping T, and let ℕ denote the set of all positive integers.
Let K be a nonempty bounded closed convex subset of E. We say that K has the fixed point property for nonexpansive mappings if every nonexpansive mapping T : K → K (i.e. ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ K) has a fixed point in K (i.e. F(T) ≠ ∅); E has the fixed point property (FPP for short) if every nonempty bounded closed convex subset of E has the fixed point property for nonexpansive mappings; E has the weak fixed point property (WFPP for short) if every weakly compact convex subset of E has the fixed point property for nonexpansive mappings. For a reflexive Banach space, both properties are obviously the same.
The famous question of whether every Banach space has the fixed point property (WFPP) remained open for a long time [1, 2]. It was answered in the negative by Sadovski and Alspach, who constructed the following examples, respectively.
Let and . Define by
The above two examples suggest that to obtain positive results in the problem of the existence of fixed points for nonexpansive mappings, it is necessary to impose some restrictions either on T or on the Banach space E. Naturally, the following questions are asked also.
Problem 1.3 Which Banach spaces satisfy the WFPP?
Problem 1.4 Determine a subclass of nonexpansive mappings such that every Banach space satisfies the FPP for this subclass.
Considerable effort in the development of a fixed point theory for nonexpansive mappings, mainly on Problem 1.3, has been made in the last 40 years. A well-known result of Browder asserts that if E is uniformly convex, then E has the weak fixed point property. This theorem was also proved independently by Göhde. At the same time, Kirk established a more general result by showing that if E has normal structure, then E has the weak fixed point property. Normal structure is a geometric property somewhat more general than uniform convexity; a detailed study of sufficient conditions for this property, as well as their permanence properties, can be found in the literature. It has also been shown that a condition weaker than normal structure is sufficient to guarantee the weak fixed point property (WFPP). In 1981, Maurey showed that the Hardy space H¹ and the reflexive subspaces of L¹ have the weak fixed point property (WFPP). For other examples of Banach spaces with the weak fixed point property see [6, 7, 14] and, for more details, [15, 16].
One of our main aims is to give an affirmative answer to Problem 1.4. In other words, we will study fixed point properties of ψ-firmly nonexpansive mapping, an important subclass of nonexpansive mappings, on weakly compact convex subsets of a Banach space.
On the other hand, using the Picard iterative method, the well-known Banach Contraction Principle is obtained: let (X, d) be a complete metric space and let T : X → X be a contraction (i.e. d(Tx, Ty) ≤ kd(x, y) for all x, y ∈ X and some k ∈ [0, 1)). Then T has a unique fixed point x* and, for each x ∈ X, the Picard iteration {Tⁿx} strongly converges to x*.
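As a concrete numerical illustration of this principle (ours, not part of the paper), the map T(x) = cos x is a contraction on [0, 1] with Lipschitz constant sin 1 ≈ 0.84 < 1, and Picard iteration converges to its unique fixed point.

```python
import math

def picard(T, x0, tol=1e-12, max_iter=1000):
    """Picard iteration x_{n+1} = T(x_n); converges to the unique fixed
    point when T is a contraction (Banach Contraction Principle)."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos maps [0, 1] into [cos 1, 1] and |cos'(x)| = |sin x| <= sin 1 < 1 there,
# so the iteration converges to the Dottie number ~ 0.739085.
print(picard(math.cos, 0.5))
```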
It has been known for some time that, even in a Hilbert space setting, the Picard iteration of a nonexpansive mapping T need not converge to a fixed point. However, for some special nonexpansive mappings (or nonexpansive mappings modified as necessary), the weak convergence of such an iteration can be proved. For example, in the frame of a uniformly convex Banach space E with a Fréchet differentiable norm, Reich showed that if S = (1/2)(I + T), where I is the identity operator and T is a nonexpansive self-mapping defined on a nonempty bounded closed convex subset K of E, then for each x ∈ K the Picard iteration {Sⁿx} weakly converges to a fixed point of T; Bruck [20, 21] proved that for each x ∈ K, the Cesàro means of the nonexpansive self-mapping T weakly converge to a fixed point of T. This fact was first established by Baillon for Hilbert spaces. Naturally, the following question is put forward.
Problem 1.5 Does there exist a subclass of nonexpansive mappings (not contraction) such that Picard iteration (weakly) converges to a fixed point of such mapping?
Another purpose of ours is to show that Picard iteration of ψ-firmly nonexpansive mapping weakly converges to its fixed point. That is, ψ-firmly nonexpansive mapping is actually an answer to Problem 1.5.
We also show that in a uniformly convex space, ψ-firmly nonexpansive mapping includes a firmly nonexpansive mapping and the resolvent of an accretive operator as a special case.
2 Fixed point property for ψ-firmly nonexpansive mapping
The concept of firmly nonexpansive mapping was introduced by Bruck . A mapping T with domain D(T) and range R(T) in a Banach space E is said to be firmly nonexpansive if for all x, y ∈ D(T) the function
t ↦ ‖(1 − t)(x − y) + t(Tx − Ty)‖
is non-increasing on [0, 1], or equivalently,
Obviously any firmly nonexpansive mapping is nonexpansive (compare the values at t = 0 and t = 1). The converse is not true (consider the mapping in E). However, there is an interesting observation. For any u ∈ K and t ∈ (0, 1), consider the contraction G_t : K → K defined by G_t x = (1 − t)u + tTx. The Banach Contraction Principle guarantees that G_t has a unique fixed point x_t ∈ K, i.e.,
Since x_t depends on u and t, we can define a family of mappings J_t : K → K by J_t u = x_t. It is a minor technicality to prove that all the mappings J_t are firmly nonexpansive. Moreover, F(J_t) = F(T) for any t ∈ (0, 1) (see [, pp.120-122] for a proof). This shows that the fixed point property for firmly nonexpansive mappings coincides with the fixed point property for nonexpansive mappings.
In a Hilbert space, T is firmly nonexpansive if and only if
‖Tx − Ty‖² ≤ ⟨x − y, Tx − Ty⟩ for all x, y ∈ D(T) (2.2)
(see [, pp.127-128] for a proof). Clearly, the inequality (2.2) is equivalent to the following:
‖Tx − Ty‖² ≤ ‖x − y‖² − ‖(I − T)x − (I − T)y‖². (2.3)
In view of the above one might expect firmly nonexpansive mappings to exhibit better behavior than nonexpansive mappings in general. However, from the point of view of fixed point theory, the restriction is mild. Naturally, Song and Chai introduced the notion of firmly type nonexpansive mapping.
A mapping T is said to be firmly type nonexpansive if for all x, y ∈ D(T) there exists k ∈ (0, +∞) such that
‖Tx − Ty‖² ≤ ‖x − y‖² − k‖(x − Tx) − (y − Ty)‖².
Obviously, the firmly type nonexpansive mapping contains the firmly nonexpansive mapping and the resolvent of monotone operator as a special case in Hilbert space. For a detailed proof and more examples, see [, Examples 1-5].
Now we introduce the concept of ψ-firmly nonexpansive mapping, which includes the firmly type nonexpansive mapping as a special case (take ψ(t) = kt²).
A mapping T is said to be ψ-firmly nonexpansive if there exists a continuous strictly increasing function ψ : [0, +∞) → [0, +∞) with ψ(0) = 0 such that for all x, y ∈ D(T),
‖Tx − Ty‖² ≤ ‖x − y‖² − ψ(‖(x − Tx) − (y − Ty)‖).
In order to achieve the objectives mentioned in Section 1, namely solving Problem 1.4, we need the following fact.
Lemma 2.1 ([, Propositions 9.3.6])
Let C be a weakly compact subset of a Banach space E and let f : C → ℝ be a weakly lower semi-continuous function. Then f attains its minimum on C. That is, there exists x₀ ∈ C such that f(x₀) = min{f(x) : x ∈ C}.
Now we show our main results.
Theorem 2.2 Let K be a weakly compact convex subsets of a Banach space E and be a ψ-firmly nonexpansive mapping. Then T has a fixed point, i.e. .
Proof Since K is bounded, closed and convex, it is well known (even for nonexpansive mappings) that there exists a sequence {x_n} in K such that lim_{n→∞} ‖x_n − Tx_n‖ = 0.
Let a real-valued function φ be defined on K by
φ(x) = limsup_{n→∞} ‖x_n − x‖², x ∈ K.
Then φ is convex and continuous, and hence weakly lower semi-continuous (see [, p.12, Proposition 5]). It follows from Lemma 2.1 that there exists z ∈ K such that φ(z) = min{φ(x) : x ∈ K}.
Next, we show that Tz = z. It is immediate from (2.5) that
Thus, we have
Therefore together with (2.6), we have
and so by the property of ψ. This yields the desired conclusion. □
Obviously, we also have the following.
Theorem 2.3 Let K be a nonempty bounded closed convex subset of a reflexive Banach space E and be a ψ-firmly nonexpansive mapping. Then T has a fixed point, i.e. .
Now we show that the firmly nonexpansive mapping is a subclass of ψ-firmly nonexpansive mapping in uniformly convex Banach space.
Lemma 2.4 (Xu [, Theorem 2])
Let p > 1 and r > 0 be two fixed real numbers. Then a Banach space is uniformly convex if and only if there exists a continuous strictly increasing convex function g : [0, +∞) → [0, +∞) with g(0) = 0 such that
‖λx + (1 − λ)y‖^p ≤ λ‖x‖^p + (1 − λ)‖y‖^p − W_p(λ)g(‖x − y‖)
for all x, y ∈ B_r and λ ∈ [0, 1], where W_p(λ) = λ^p(1 − λ) + λ(1 − λ)^p and B_r = {x ∈ E : ‖x‖ ≤ r}.
Theorem 2.5 Let K be a nonempty bounded closed convex subset of a uniformly convex Banach space E and be a firmly nonexpansive mapping. Then T is a ψ-firmly nonexpansive mapping.
Proof Since T is firmly nonexpansive, by (2.1), we have
By Lemma 2.4 (, ), we obtain
Let for all . The desired result is reached. □
Let A ⊂ E × E be an accretive operator. For r > 0, let J_r = (I + rA)⁻¹, the resolvent of A. It is well known that J_r : R(I + rA) → D(A) is nonexpansive, where R(I + rA) is the range of I + rA and I is the identity operator of E. Furthermore, for r ≥ s > 0 and x ∈ R(I + rA) ∩ R(I + sA),
J_r x = J_s((s/r)x + (1 − s/r)J_r x), (2.9)
which is referred to as the Resolvent Identity. Now we show that for each r > 0, the resolvent of A is a ψ-firmly nonexpansive mapping also.
which is referred to as the Resolvent Identity. Now we show that for each , the resolvent of A is an ψ-firmly nonexpansive mapping also.
Example 2.6 Let E be a uniformly convex Banach space and let A ⊂ E × E be an accretive operator. Then for each r > 0, J_r is a ψ-firmly nonexpansive mapping defined on R(I + rA).
Proof It follows from the Resolvent Identity (2.9) that
Then we have
Using the same proof techniques as Theorem 2.5, we also have
where . □
The other three similar mappings were introduced by Aoyama et al. . For a subset C of a smooth Banach space E, a mapping is of
type (P) (or firmly nonexpansive-like) if
type (Q) (or firmly nonexpansive type; see Kohsaka et al. ) if
type (R) (or firmly generalized nonexpansive) if
where J is the normalized duality mapping of E and is generalized dual pairs on .
Remark 2.7 The common point between ψ-firmly nonexpansive mapping and the above three mappings is that they all include a firmly nonexpansive mapping in Hilbert spaces as a special case. However, in a uniformly convex Banach space, each firmly nonexpansive mapping is a ψ-firmly nonexpansive mapping, but it is not one of the above three mappings since a uniformly convex Banach space may not be smooth.
Remark 2.8 In the framework of a smooth, strictly convex and reflexive Banach space, the fixed point properties of the above three mappings were studied by Aoyama et al. , Kohsaka et al. and many mathematical workers. Only in reflexive Banach space, we can obtain the fixed point property of ψ-firmly nonexpansive mappings.
3 Approximation methods of ψ-firmly nonexpansive mappings
We discuss the weak convergence of Picard iteration for ψ-firmly nonexpansive mapping.
Lemma 3.1 Let K be a nonempty closed convex subset of a Banach space E and let T : K → K be ψ-firmly nonexpansive with F(T) ≠ ∅. For any given x₁ ∈ K, let {x_n} be defined by the Picard iteration
x_{n+1} = Tx_n, n ∈ ℕ. (3.1)
Then {x_n} is an asymptotic fixed point sequence of T, i.e.
lim_{n→∞} ‖x_n − Tx_n‖ = 0.
Proof Take p ∈ F(T). Then
Therefore, we have
Consequently, is non-increasing and bounded, and hence the limit exists. So, is bounded also. It follows from (3.2) that
and hence, by the property of ψ. The desired result is obtained. □
A Banach space E is said to satisfy Opial's condition if, for any sequence {x_n} in E, x_n ⇀ x implies
limsup_{n→∞} ‖x_n − x‖ < limsup_{n→∞} ‖x_n − y‖ for all y ≠ x.
In particular, Opial's condition is independent of uniform convexity (smoothness), since the ℓ^p spaces satisfy this condition for 1 < p < ∞ while it fails for the L^p (p ≠ 2) spaces. In fact, spaces satisfying Opial's condition need not even be isomorphic to uniformly convex spaces .
Theorem 3.2 Let K be a weakly compact convex subset of a Banach space E satisfying Opial’s condition and be firmly type nonexpansive. Then for any given , , defined by Picard iteration (3.1) weakly converges to some fixed point of T.
Proof It follows from Theorem 2.2 that F(T) ≠ ∅. Then, following Lemma 3.1, we see that {x_n} is bounded, the limit lim_{n→∞} ‖x_n − p‖ exists for each p ∈ F(T), and
The weak compactness of K means that there exists a subsequence of {x_n} that weakly converges to some point of K, say x̄. Then, using the proof technique of Theorem 2.2, we have x̄ ∈ F(T), since by Opial's condition,
Next we show that {x_n} weakly converges to x̄. Let y be another weak limit point of {x_n} with y ≠ x̄. Then we can choose a subsequence that weakly converges to y, and hence y ∈ F(T). Since lim_{n→∞} ‖x_n − p‖ exists for each p ∈ F(T), we have
a contradiction, and hence . □
Similarly, we also have the following.
Theorem 3.3 Let K be a nonempty closed convex subset of a reflexive Banach space E satisfying Opial’s condition and be firmly type nonexpansive with . Then for any given , , defined by Picard iteration (3.1) weakly converges to some fixed point of T.
Remark 3.4 Theorem 3.2 is applicable to () and . However, we do not know whether it works in for and .
Recall that a Banach space E is said to have (i) a Gâteaux differentiable norm (we also say that E is smooth) if the limit
lim_{t→0} (‖x + ty‖ − ‖x‖)/t
exists for each x and y in the unit sphere of E; (ii) a uniformly Gâteaux differentiable norm, if for each y in E the limit is attained uniformly for bounded x; (iii) a Fréchet differentiable norm, if for each x the limit is attained uniformly for bounded y.
The value of f ∈ E* at x ∈ E is denoted by ⟨x, f⟩, and the normalized duality mapping from E into 2^{E*} is denoted by J, that is,
J(x) = {f ∈ E* : ⟨x, f⟩ = ‖x‖² = ‖f‖²}.
It is well known (see Browder [, p.44]) that for a smooth Banach space E, the normalized duality mapping J is single-valued, and, moreover,
A Banach space E is said to be (iv) strictly convex if ‖x‖ = ‖y‖ = 1 with x ≠ y implies ‖(x + y)/2‖ < 1; (v) uniformly convex if for each ε ∈ (0, 2] there exists δ > 0 such that ‖(x + y)/2‖ ≤ 1 − δ whenever ‖x‖ = ‖y‖ = 1 and ‖x − y‖ ≥ ε.
In 1979, Bruck explicitly introduced the following concept. Let Γ denote the set of strictly increasing convex functions γ : [0, +∞) → [0, +∞) with γ(0) = 0. A mapping T is said to be of type Γ if there exists γ ∈ Γ such that for all x, y ∈ K and t ∈ [0, 1],
γ(‖tTx + (1 − t)Ty − T(tx + (1 − t)y)‖) ≤ ‖x − y‖ − ‖Tx − Ty‖.
Three facts about such mappings are easy to observe: mappings of type Γ are nonexpansive; affine nonexpansive mappings are of type Γ; and mappings of type Γ have convex fixed point sets. Bruck [20, 21] showed that each nonexpansive mapping is of type Γ in a uniformly convex Banach space. See also [, Proposition 10.3] for a proof.
Theorem 3.5 Let E be a uniformly convex Banach space with a Fréchet differentiable norm and K be a nonempty closed convex subset of E. If is ψ-firmly nonexpansive with , then for any given , , defined by Picard iteration (3.1) weakly converges to some fixed point of T.
Proof It follows from Lemma 3.1 that {x_n} is bounded, the limit lim_{n→∞} ‖x_n − p‖ exists for each p ∈ F(T), and
Then {x_n} is relatively weakly compact. Similarly to the proof of Theorem 3.2, we only need to show that {x_n} has a unique weak limit point. Let p and q be two weak limit points of {x_n}. Then the Browder Demiclosedness Principle means that p, q ∈ F(T). Thus both the limits lim_{n→∞} ‖x_n − p‖ and lim_{n→∞} ‖x_n − q‖ exist. The remainder of the proof is identical to the proof of Theorem 10.6 in Reference [, pp.114-115], with the help of the mappings of type Γ; since it repeats that work, we omit it. □
By Theorem 2.5, the following corollary about firmly nonexpansive mappings is obvious.
Corollary 3.6 Let E be a uniformly convex Banach space with a Fréchet differentiable norm and K be a nonempty closed convex subset of E. If is firmly nonexpansive with , then for any given , , defined by Picard iteration (3.1) weakly converges to some fixed point of T.
Remark 3.7 Theorem 3.5 is dependent of Theorem 3.2 or 3.3 since the spaces satisfy Opial’s condition for while it fails for the () spaces. On the other hand, spaces satisfying Opial’s condition need not even by isomorphic to uniformly convex spaces .
Reich S: The fixed point property for nonexpansive mappings. I. Am. Math. Mon. 1976, 83: 266–268. 10.2307/2318219
Reich S: The fixed point property for nonexpansive mappings. II. Am. Math. Mon. 1980, 87: 292–294. 10.2307/2321568
Sadovski VN: Application of topological methods in the theory of periodic solutions of nonlinear differential-operator equations of neutral type. Dokl. Akad. Nauk SSSR 1971, 200: 1037–1040. (in Russian). Sov. Phys. Dokl. 12 (1971)
Alspach DE: A fixed point free nonexpansive map. Proc. Am. Math. Soc. 1981, 82: 423–424. 10.1090/S0002-9939-1981-0612733-0
Istrăţescu VI: Fixed Point Theory: An Introduction. Reidel, Dordrecht; 1981.
Kirk WA, Sims B: Examples of fixed point free mappings. In Handbook of Metric Fixed Point Theory. Kluwer Academic, Dordrecht; 2001:35–91.
Goebel K, Kirk WA Cambridge Stud. Adv. Math. 28. In Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.
Browder FE: Non-expansive nonlinear operators in Banach spaces. Proc. Natl. Acad. Sci. USA 1965, 54: 1041–1044. 10.1073/pnas.54.4.1041
Göhde D: Zum prinzip der kontraktiven abbildung. Math. Nachr. 1965, 30: 251–258. 10.1002/mana.19650300312
Kirk WA: A fixed point theorem for mappings which do not increase distances. Am. Math. Mon. 1965, 72: 1004–1006. 10.2307/2313345
Sims B, Smyth MA: On some Banach space properties sufficient for weak normal structure and their permanence properties. Trans. Am. Math. Soc. 1999, 351: 497–513. 10.1090/S0002-9947-99-01862-0
Baillon JB, Schöneberg R: Asymptotic normal structure and fixed points of nonexpansive mappings. Proc. Am. Math. Soc. 1981, 81: 257–264. 10.1090/S0002-9939-1981-0593469-1
Maurey B:Points fixes des contractions sur un convex fermé de . In Seminaire d’Analyse Fonctionelle, 1980–1981. École Polytechnique, Palaiseau; 1981.
Dowling PN, Lennard CJ, Turett B:Weak compactness is equivalent to the fixed point property in . Proc. Am. Math. Soc. 2004, 132: 1659–1666. 10.1090/S0002-9939-04-07436-2
Domínguez Benavides T, Pineda MAJ: Fixed points of nonexpansive mappings in spaces of continuous functions. Proc. Am. Math. Soc. 2005, 133: 3037–3046. 10.1090/S0002-9939-05-08149-9
Domínguez Benavides T, Pineda MAJ, Prus S: Weak compactness and fixed point property for affine mappings. J. Funct. Anal. 2004, 209: 1–15. 10.1016/j.jfa.2002.02.001
Aksoy A, Khamsi MA: Nonstandard Methods in Fixed Point Theory. Springer, Berlin; 1990.
Elton J, Lin P-K, Odell E, Szarek S: Remarks on fixed point problem for nonexpansive mappings. Contemporary Math. 18. In Fixed Points and Nonexpansive Maps Edited by: Sine R. 1983, 87–120.
Reich S: Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 1979, 67: 274–276. 10.1016/0022-247X(79)90024-6
Bruck RE: A simple proof of the mean ergodic theorem for nonlinear contractions in Banach spaces. Isr. J. Math. 1979, 32: 107–116. 10.1007/BF02764907
Bruck RE: On the convex approximation property and the asymptotic behavior of nonlinear contractions in Banach spaces. Isr. J. Math. 1981, 38: 304–314. 10.1007/BF02762776
Baillon JB: Un théorème de type ergodique pour les contractions non linéairs dans un espaces de Hilbert. C. R. Acad. Sci. Paris Sér. A-B 1975, 280: 1511–1541.
Bruck RE: Nonexpansive projections on subsets of Banach spaces. Pac. J. Math. 1973, 47: 341–355. 10.2140/pjm.1973.47.341
Song Y, Chai X: Halpern iteration for firmly type nonexpansive mappings. Nonlinear Anal. 2009, 71(10):4500–4506. 10.1016/j.na.2009.03.018
Aubin JP, Ekeland I: Applied Nonlinear Analysis. A Wiley-Interscience Publication. Wiley, New York; 1984.
Xu HK: Inequality in Banach spaces with applications. Nonlinear Anal. 1991, 16: 1127–1138. 10.1016/0362-546X(91)90200-K
Aoyama K, Kohsaka F, Takahashi W: Three generalizations of firmly nonexpansive mappings: their relations and continuity properties. J. Nonlinear Convex Anal. 2009, 10(1):131–147.
Kohsaka F, Takahashi W: Existence and approximation of fixed points of firmly nonexpansive type mappings in Banach spaces. SIAM J. Optim. 2008, 19(2):824–835. 10.1137/070688717
Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings in Banach spaces. Bull. Am. Math. Soc. 1967, 73: 591–597. 10.1090/S0002-9904-1967-11761-0
Lami Dozo E: Multivalued nonexpansive mappings and Opial’s condition. Proc. Am. Math. Soc. 1973, 38: 286–292.
Browder FE: Nonlinear operators and nonlinear equations of evolution in Banach spaces. 18. In Proc. Symp. Pure Math.. Am. Math. Soc., Providence; 1976. Part 2
Reich S: Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 1980, 75: 287–292. 10.1016/0022-247X(80)90323-6
Browder FE: Semicontractive and semiaccretive nonlinear mappings in a Banach space. Bull. Am. Math. Soc. 1968, 74: 660–665. 10.1090/S0002-9904-1968-11983-4
A dog is looking for a good place to bury his bone. Can you work out where he started and ended in each case? What possible routes could he have taken?
Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 will be?
Can you shunt the trucks so that the Cattle truck and the Sheep truck change places and the Engine is back on the main line?
What is the best way to shunt these carriages so that each train can continue its journey?
A 3x3x3 cube may be reduced to unit cubes in six saw cuts. If after every cut you can rearrange the pieces before cutting straight through, can you do it in fewer?
Here is a solitaire type environment for you to experiment with. Which targets can you reach?
Imagine a wheel with different markings painted on it at regular intervals. Can you predict the colour of the 18th mark? The 100th mark?
If you have only 40 metres of fencing available, what is the maximum area of land you can fence off?
You have 4 red and 5 blue counters. How many ways can they be placed on a 3 by 3 grid so that all the rows columns and diagonals have an even number of red counters?
Can you mark 4 points on a flat surface so that there are only two different distances between them?
In a square in which the houses are evenly spaced, numbers 3 and 10 are opposite each other. What is the smallest and what is the largest possible number of houses in the square?
A cylindrical helix is just a spiral on a cylinder, like an ordinary spring or the thread on a bolt. If I turn a left-handed helix over (top to bottom) does it become a right handed helix?
A tetromino is made up of four squares joined edge to edge. Can this tetromino, together with 15 copies of itself, be used to cover an eight by eight chessboard?
Hover your mouse over the counters to see which ones will be removed. Click to remove them. The winner is the last one to remove a counter. How can you make sure you win?
How can you make an angle of 60 degrees by folding a sheet of paper twice?
This task, written for the National Young Mathematicians' Award 2016, involves open-topped boxes made with interlocking cubes. Explore the number of units of paint that are needed to cover the boxes...
A bus route has a total duration of 40 minutes. Every 10 minutes, two buses set out, one from each end. How many buses will one bus meet on its way from one end to the other end?
Triangular numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?
An irregular tetrahedron is composed of four different triangles. Can such a tetrahedron be constructed where the side lengths are 4, 5, 6, 7, 8 and 9 units of length?
These are pictures of the sea defences at New Brighton. Can you work out what a basic shape might be in both images of the sea wall and work out a way they might fit together?
Start with a large square, join the midpoints of its sides, you'll see four right angled triangles. Remove these triangles, a second square is left. Repeat the operation. What happens?
ABCD is a regular tetrahedron and the points P, Q, R and S are the midpoints of the edges AB, BD, CD and CA. Prove that PQRS is a square.
ABCDEFGH is a 3 by 3 by 3 cube. Point P is 1/3 along AB (that is AP : PB = 1 : 2), point Q is 1/3 along GH and point R is 1/3 along ED. What is the area of the triangle PQR?
Imagine you are suspending a cube from one vertex (corner) and allowing it to hang freely. Now imagine you are lowering it into water until it is exactly half submerged. What shape does the surface...
How many different triangles can you make on a circular pegboard that has nine pegs?
Charlie and Alison have been drawing patterns on coordinate grids. Can you picture where the patterns lead?
Lyndon Baker describes how the Möbius strip and Euler's law can introduce pupils to the idea of topology.
Slide the pieces to move Khun Phaen past all the guards into the position on the right from which he can escape to freedom.
A huge wheel is rolling past your window. What do you see?
A game for 2 players. Can be played online. One player has 1 red counter, the other has 4 blue. The red counter needs to reach the other side, and the blue needs to trap the red.
Can you make a 3x3 cube with these shapes made from small cubes?
A useful visualising exercise which offers opportunities for discussion and generalising, and which could be used for thinking about the formulae needed for generating the results on a spreadsheet.
10 space travellers are waiting to board their spaceships. There are two rows of seats in the waiting room. Using the rules, where are they all sitting? Can you find all the possible ways?
Design an arrangement of display boards in the school hall which fits the requirements of different people.
Can you work out how many cubes were used to make this open box? What size of open box could you make if you had 112 cubes?
How could Penny, Tom and Matthew work out how many chocolates there are in different sized boxes?
This 100 square jigsaw is written in code. It starts with 1 and ends with 100. Can you build it up?
Cut four triangles from a square as shown in the picture. How many different shapes can you make by fitting the four triangles back together?
Can you find a way of representing these arrangements of balls?
In the game of Noughts and Crosses there are 8 distinct winning lines. How many distinct winning lines are there in a game played on a 3 by 3 by 3 board, with 27 cells?
Bilbo goes on an adventure, before arriving back home. Using the information given about his journey, can you work out where Bilbo lives?
What is the shape of wrapping paper that you would need to completely wrap this model?
A Hamiltonian circuit is a continuous path in a graph that passes through each of the vertices exactly once and returns to the start. How many Hamiltonian circuits can you find in these graphs?
Investigate how the four L-shapes fit together to make an enlarged L-shape. You could explore this idea with other shapes too.
Watch these videos to see how Phoebe, Alice and Luke chose to draw 7 squares. How would they draw 100?
Can you fit the tangram pieces into the outlines of the workmen?
This article for teachers discusses examples of problems in which there is no obvious method but in which children can be encouraged to think deeply about the context and extend their ability to...
Can you make sense of the charts and diagrams that are created and used by sports competitors, trainers and statisticians?
A game for 2 players. Given a board of dots in a grid pattern, players take turns drawing a line by connecting 2 adjacent dots. Your goal is to complete more squares than your opponent.
Can you fit the tangram pieces into the outlines of the watering can and man in a boat?
Place the numbers 1 to 6 in the circles so that each number is the difference between the two numbers just below it.
Place the numbers 1 to 10 in the circles so that each number is the difference between the two numbers just below it.
Use the clues to colour each square.
If you hang two weights on one side of this balance, in how many different ways can you hang three weights on the other side for it to be balanced?
Have a go at this well-known challenge. Can you swap the frogs and toads in as few slides and jumps as possible?
Can you find all the ways to get 15 at the top of this triangle of numbers? Many opportunities to work in different ways.
This problem is based on a code using two different prime numbers less than 10. You'll need to multiply them together and shift the alphabet forwards by the result. Can you decipher the code?
In this problem it is not the squares that jump, you do the jumping! The idea is to go round the track in as few jumps as possible.
Find the sum and difference between a pair of two-digit numbers. Now find the sum and difference between the sum and difference! What happens?
Place six toy ladybirds into the box so that there are two ladybirds in every column and every row.
What do the numbers shaded in blue on this hundred square have in common? What do you notice about the pink numbers? How about the shaded numbers in the other squares?
How many ways can you find to do up all four buttons on my coat? How about if I had five buttons? Six ...?
There are 78 prisoners in a square cell block of twelve cells. The clever prison warder arranged them so there were 25 along each wall of the prison block. How did he do it?
Can you put the numbers 1 to 8 into the circles so that the four calculations are correct?
This task follows on from Build it Up and takes the ideas into three dimensions!
This challenge focuses on finding the sum and difference of pairs of two-digit numbers.
This challenge is about finding the difference between numbers which have the same tens digit.
Only one side of a two-slice toaster is working. What is the quickest way to toast both sides of three slices of bread?
Ten cards are put into five envelopes so that there are two cards in each envelope. The sum of the numbers inside each envelope is written on it. What numbers could be inside the envelopes?
Can you put the numbers from 1 to 15 on the circles so that no consecutive numbers lie anywhere along a continuous straight line?
Your challenge is to find the longest way through the network following this rule. You can start and finish anywhere, and with any shape, as long as you follow the correct order.
My briefcase has a three-number combination lock, but I have forgotten the combination. I remember that there's a 3, a 5 and an 8. How many possible combinations are there to try?
There are nine teddies in Teddy Town - three red, three blue and three yellow. There are also nine houses, three of each colour. Can you put them on the map of Teddy Town according to the rules?
Alice and Brian are snails who live on a wall and can only travel along the cracks. Alice wants to go to see Brian. How far is the shortest route along the cracks? Is there more than one way to go?
What do the digits in the number fifteen add up to? How many other numbers have digits with the same total but no zeros?
Add the sum of the squares of four numbers between 10 and 20 to the sum of the squares of three numbers less than 6 to make the square of another, larger, number.
You have two egg timers. One takes 4 minutes exactly to empty and the other takes 7 minutes. What times in whole minutes can you measure and how?
Katie had a pack of 20 cards numbered from 1 to 20. She arranged the cards into 6 unequal piles where each pile added to the same total. What was the total and how could this be done?
Can you work out how to balance this equaliser? You can put more than one weight on a hook.
This task, written for the National Young Mathematicians' Award 2016, involves open-topped boxes made with interlocking cubes. Explore the number of units of paint that are needed to cover the boxes...
The planet of Vuvv has seven moons. Can you work out how long it is between each super-eclipse?
Can you work out how many cubes were used to make this open box? What size of open box could you make if you had 112 cubes?
Ben and his mum are planting garlic. Can you find out how many cloves of garlic they might have had?
Chandra, Jane, Terry and Harry ordered their lunches from the sandwich shop. Use the information below to find out who ordered each sandwich.
I was in my car when I noticed a line of four cars on the lane next to me with number plates starting and ending with J, K, L and M. What order were they in?
Find the product of the numbers on the routes from A to B. Which route has the smallest product? Which the largest?
There were chews for 2p, mini eggs for 3p, Chocko bars for 5p and lollypops for 7p in the sweet shop. What could each of the children buy with their money?
Zumf makes spectacles for the residents of the planet Zargon, who have either 3 eyes or 4 eyes. How many lenses will Zumf need to make all the different orders for 9 families?
Kate has eight multilink cubes. She has two red ones, two yellow, two green and two blue. She wants to fit them together to make a cube so that each colour shows on each face just once.
This dice train has been made using specific rules. How many different trains can you make?
Arrange eight of the numbers between 1 and 9 in the Polo Square below so that each side adds to the same total.
Tim's class collected data about all their pets. Can you put the animal names under each column in the block graph using the information?
Can you rearrange the biscuits on the plates so that the three biscuits on each plate are all different and there is no plate with two biscuits the same as two biscuits on another plate?
Can you work out the arrangement of the digits in the square so that the given products are correct? The numbers 1 - 9 may be used once and once only.
Can you put plus signs in so this is true? 1 2 3 4 5 6 7 8 9 = 99 How many ways can you do it?
This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
Can you fill in this table square? The numbers 2 -12 were used to generate it with just one number used twice.
This task, written for the National Young Mathematicians' Award 2016, invites you to explore the different combinations of scores that you might get on these dart boards.
In how many ways could Mrs Beeswax put ten coins into her three puddings so that each pudding ended up with at least two coins?
This task, written for the National Young Mathematicians' Award 2016, focuses on 'open squares'. What would the next five open squares look like?
And, putting (x - 7)/17 = p, we shall have x = 17p + 7. Which value of x, being substituted in the second fraction, gives (17p + 7 - 13)/26 = (17p - 6)/26 = wh.
Which, being multiplied by 3, becomes (51p - 18)/26 = 2p - (p + 18)/26; where, by rejecting 2p, there remains (p + 18)/26 = wh., which put = r.
Therefore p = 26r - 18; whence, if r be taken = 1, we shall have p = 8. And consequently x = 17p + 7 = 17 X 8 + 7 = 143, the number sought.
2. It is required to find the least whole number, which, being divided by 11, 19, and 29, shall leave the remain. ders 3, 5, and 10, respectively.
Let x= the number required.
Then (x - 3)/11, (x - 5)/19, and (x - 10)/29 = whole numbers.
And, putting (x - 3)/11 = p, we shall have x = 11p + 3.
Which value of x, being substituted in the second fraction, gives (11p + 3 - 5)/19 = (11p - 2)/19 = wh.
Which, being multiplied by 2, becomes (22p - 4)/19 = p + (3p - 4)/19; and, by rejecting p, there will remain (3p - 4)/19 = wh.
Whence we shall have p = 19r - 5, and x = 11(19r - 5) + 3 = 209r - 52.
And if this value be substituted for x in the third fraction, there will arise (209r - 52 - 10)/29 = (209r - 62)/29 = wh.
Or, by neglecting 7r - 2, we shall have the remaining (6r - 4)/29 = wh.
Which, being multiplied by 5, becomes (30r - 20)/29 = r + (r - 20)/29; or, by rejecting r, there will remain (r - 20)/29 = wh., which put = s.
Then r = 29s + 20; where, by taking s = 0, we shall have r = 20.
And consequently x = 209r - 52 = 209 X 20 - 52 = 4128, the number required.
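(For modern readers: the successive-substitution procedure used above is easy to mechanize. The following Python sketch is mine rather than the book's, and it takes the first example, as reconstructed above, to ask for remainder 7 on division by 17 and remainder 13 on division by 26.)

```python
def least_with_remainders(divisors, remainders):
    """Least non-negative x with x % d == r for each pair (d, r).

    Successive substitution, as in the worked examples: keep x = base + k*step
    and fold in one remainder condition at a time.
    """
    base, step = 0, 1
    for d, r in zip(divisors, remainders):
        while base % d != r:   # a solution exists when the moduli are coprime
            base += step
        step *= d
    return base

print(least_with_remainders([17, 26], [7, 13]))         # -> 143 (example 1)
print(least_with_remainders([11, 19, 29], [3, 5, 10]))  # -> 4128 (example 2)
```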
3. To find a number, which, being divided by 6, shall leave the remainder 2, and when divided by 13, shall leave the remainder 3.
Ans. 68 4. It is required to find a number, which being divided by 7, shall leave 5 for a remainder, and if divided by 9, the remainder shall be 2.
Ans. 110 5. It is required to find the least whole number, which, being divided by 39, shall leave the remainder 16, and when divided by 56, the remainder shall be 27.
Ans. 1147 6. It is required to find the least whole number, which, being divided by 7, 8, and 9, respectively, shall leave the remainders 5, 7, and 8.
Ans. 1727 7. It is required to find the least whole number, which, being divided by each of the nine digits, 1, 2, 3, 4, 5, 6, 7, 8, 9, shall leave no remainders.
Ans. 2520 8. A person receiving a box of oranges, observed, that, when he told them out by 2, 3, 4, 5, and 6 at a time, he had none remaining; but when he told them out by 7 at a time, there remained 5; how many oranges were there in the box ?
This branch of Algebra, which is so called from its inventor, Diophantus, a Greek mathematician of Alexandria in Egypt, who flourished in or about the third century after Christ, relates chiefly to the finding of square and cube numbers, or to the rendering certain compound expressions free from surds; the method of doing which is by making such substitutions for the unknown quantity, as will reduce the resulting equation to
a simple one, and then finding the value of that quantity in terms of the rest. (p)
These questions are so exceedingly curious and ab. struse, that nothing less than the most refined Algebra, applied with the utmost skill and judgment, can surmount the difficulties which attend them. And, in this respect, no one has extended the limits of the analytic art further than Diophantus, or discovered greater knowledge and penetration in the application of it.
When we consider his work with attention, we are
(p) That Diophantus was not the inventor of Algebra, as has been generally imagined, is obvious; since his method of applying it is such, as could only have been used in a very advanced state of the science; besides which, he no where speaks of the fundamental rules and principles, as an inventor certainly would have done, but treats of it as an art already sufficiently known; and seems to intend, not so much to teach it, as to cultivate and improve it, by solving such questions as, before his time, had been thought too difficult to be surmounted.
It is highly probable, therefore, that Algebra was known among the Greeks, long before the time of Diophantus ; but that the works of preceding writers have been destroyed by the ravages of time, or the depredations of war and barbarism.
His Arithmetical Questions, out of which these problems were mostly collected, consisted originally of thirteen books; but the first six only are now extant ; the best edition of which is that published at Paris, by Bachet, in the year 1670, with Notes by Fermat. In this work, the subject is so skilfully handled, that the moderns, notwithstanding their other improvements, have been able to do little more than explain and illustrate his method.
Those who have succeeded best in this respect, are Vieta, Kersey, De Billy, Ozanam, Prestet, Saunderson, Fermat, and Euler; the last of whom, in particular, has amplified and illustrated the Diophantine Algebra in as clear and satisfactory a manner as the subject seems to admit of.
The reader will find a methodical abstract of the several methods made use of by these writers, with a variety of examples to illustrate them, in the first and second volumes of my Treatise of Algebra, before quoted.
at a loss which to admire most, his wonderful sagacity, and the peculiar artifices he employs, in forming such positions as the nature of the problems required, or the more than ordinary subtility of his reasoning upon them.
Every particular question puts us upon a new way of thinking, and furnishes a fresh vein of analytical treasure, which cannot but prove highly useful to the mind, not only in conducting it through other difficulties of this kind, whenever they may occur, but, also, in enabling it to encounter, more readily, those that may arise in subjects of a different nature.
The following method of resolving these questions will be found of considerable service; but no general rule can be given, that will suit all cases; and therefore the solution must often be left to the ingenuity and skill of the learner.
1. Put for the root of the square or cube required, one or more letters such that, when they are involved, either the given number, or the highest power of the unknown quantity, may vanish from the equation ; and then if the unknown quantity be only of one dimension, the problem will be solved by reducing the equation.
2. But if the unknown quantity be still a square, or a higher power, some other new letters must be assumed to denote the root; with which proceed as before; and so on, till the unknown quantity is but of one dimension; when, from this, all the rest may be determined.
1. To divide a given square number (100) into two such parts, that each of them may be a square number. (q)
(q) If x - 10 had been made the side of the second square, the following solution of this question, instead of 2x - 10; the ...
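(The footnote and the printed solution are cut off in this copy. For orientation, here is a sketch, not the book's text, of the standard substitution that Rule 1 prescribes, taking 2x - 10 for the side of the second square as the footnote suggests.)

```latex
% Let the two parts be x^2 and (2x-10)^2, so that the given number 100 cancels:
\[
x^2 + (2x-10)^2 = 100
\;\Longrightarrow\; 5x^2 - 40x + 100 = 100
\;\Longrightarrow\; 5x^2 = 40x
\;\Longrightarrow\; x = 8,
\]
% giving the parts $8^2 = 64$ and $(2\cdot 8 - 10)^2 = 6^2 = 36$, with $64 + 36 = 100$.
```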
US 4630299 A
A simple digital filter having only a single multiplier operates to extract the pilot tone from a demodulated and digitized FM stereo signal. The number of poles in the filter is chosen so that the filter output is also usable to produce a digitized representation of the carrier tone for the left minus right stereo channel. This signal is in turn used to decode the input representation into the desired left and right channel signals. The circuit of the present invention is particularly amenable to fabrication on an integrated circuit chip.
1. A digital circuit for decoding a demodulated and digitized FM stereo signal comprising at least stereo carrier, L+R, and L-R signals into digitized left and right channel components, said circuit comprising:
a digital filter for receiving said digitized FM stereo signal as an input sequence x(nT) and generating an output sequence y(nT), said filter having a z-transform transfer function substantially equal to H(z) = H(1 + √3z^-1 + z^-2 - z^-4 - √3z^-5 - z^-6)/(1 + (1 - 2^(2-N))z^-6), where N is a real number greater than 2 and H is a scale factor, said filter operating at a sampling rate with period T corresponding to a sampling rate which is substantially equal to 228 kHz;
a first digital multiplier circuit for forming the product of the output of said filter at time nT, y(nT), with the output of said filter at time (n-3)T, y(nT-3T), so as to form a digitized representation of the stereo carrier signal, c(nT); and
a second digital multiplier circuit for forming the product of the input to said filter at time nT, x(nT), with the output of said first digital multiplier at time nT, c(nT), so as to form a digitized representation of the L-R signal.
2. The signal circuit of claim 1 further comprising:
a digital summation circuit to form the sum of said L-R signal from said second digital multiplier and said input signal x(nT), whereby a signal representing said left stereo channel information is produced; and
a digital subtraction circuit to form the difference between said L-R signal and said input signal x(nT), whereby a signal representing said right stereo channel information is produced.
3. The circuit of claim 1 in which N is an integer.
4. The circuit of claim 1 in which N=8.
5. A digital filter circuit for receiving an input sequence x(nT) and generating an output sequence y(nT), said filter having a z-transform transfer function substantially equal to H(z) = H(1 + √3z^-1 + z^-2 - z^-4 - √3z^-5 - z^-6)/(1 + (1 - 2^(2-N))z^-6), where N is a real number greater than 2 and H is a scale factor, said filter operating at a sampling rate with period T which corresponds to a sampling rate which is substantially equal to 228 kHz.
The present invention relates to digital circuits and in particular to digital filters for decoding demodulated FM stereo signals into left and right channels. Additionally, the digital circuits of the present invention are particularly directed to simplified circuits for stereo channel decoding which are particularly amenable to fabrication on integrated circuit chips, either alone or on the same chip with other FM and/or AM signal processing circuitry.
In stereo FM broadcasts, three fundamental signals are transmitted. One part of the signal spectrum is allotted to transmission of a signal representing the sum of the left and right channels. Another part of the spectrum is allotted to the transmission of a signal representing the difference between the left and right channels. A third part of the standard FM stereo broadcast signal includes a 19 kHz pilot tone. This tone is used in demodulating the signal into left and right channel portions.
The present invention is particularly directed to that part of the circuitry which receives a demodulated signal which has already been digitized. As used herein and in the appended claims, the term digitized refers to the conversion of periodically sampled analog signals into equivalent binary number representations. Typically each analog sample is converted into a representation in terms of a sequence of binary digits. However, it is noted that while the analog samples are typically converted into a binary representation in which each position in the representation corresponds to a particular weighting factor which is a power of two, other number representational systems may be employed without departing from the spirit of the invention which is disclosed herein.
With respect to this invention, it is noted that it is directed to a circuit which receives already demodulated and digitized signals in which both left and right channel information is present. Accordingly, it is the function of the circuit of the present invention to produce digitized output signals representing extracted left channel and right channel information.
Conventionally, recovery of the 19 kHz pilot tone in FM receivers is accomplished using totally analog circuitry and design principles. These principles typically involve the use of a phase locked loop which locks onto the 19 kHz tone with a 38 kHz oscillator whose output is applied to a frequency divider which divides the frequency by a factor of two. Recovery of the pilot tone is essential for separating the left and right channel information signals.
However, it is not enough simply to provide a digital filter whose frequency response is such that the 19 kHz tone is passed through unattenuated while substantially all other frequencies are rejected. In order to provide the mechanism for producing the desired 38 kHz tone for ultimate channel separation, it is necessary that proper phase relationships in the signal output be present. Furthermore, while it is known that it is relatively easy to construct digital circuitry for performing operations such as addition and subtraction, it is also known that it is correspondingly much more difficult to provide digital circuitry for operations such as multiplication. Accordingly, one of the desirable features of an appropriate digital filter is an implementation in which a minimal number of multiplication operations is to be performed. In the preferable case in which the circuitry of the present invention is implemented on an integrated circuit chip, the problems of chip size and "real estate" also dictate that there be as few digital multiplication circuits as possible to conserve both space and power.
In accordance with a preferred embodiment of the present invention, a digital filter having a special form to reduce the number of multiplications required is employed to produce a digitized representation of the 19 kHz pilot tone together with another sampled output representing the 19 kHz tone shifted by a phase angle of 90°. This allows the production of the desired 38 kHz stereo carrier signal which is used to resolve the incoming signal into left and right channel components. In the preferred embodiment of the present invention, the digital filter is operated at a sampling rate of 228 kHz which is 12 times the 19 kHz frequency of the pilot tone. Operation at this sampling rate is very beneficial in terms of circuit simplicity particularly in terms of the need for a minimal number of multiplier circuits. A first digital multiplier circuit is used to form the product of the output of the digital filter at a certain time with the output of the filter at an earlier time to produce the digitized representation of the 38 kHz stereo carrier signal. This signal is then employed in connection with a second digital multiplier whose output represents a digitized version of the "left minus right" signal. A simple summer and subtractor are thereafter employed to extract the left and right channels in distinct signal paths.
Accordingly, it is an object of the present invention to provide a digital circuit for decoding demodulated and digitized FM stereo signals into left and right channel components.
It is also an object of the present invention to provide a digital circuit for stereo FM decoding which employs a minimal number of multiplier circuits.
It is yet another object of the present invention to provide a digital channel separation circuit which is amenable to fabrication on an integrated circuit chip, either by itself or as part of other decoding and demodulation circuitry.
It is still another object of the present invention to contribute to digital processing of FM signals.
Lastly, but not limited hereto, it is an object of the present invention to provide a digital circuit for FM stereo signal decoding which requires small space, low power and is readily fabricated on a single integrated circuit chip.
The subject matter which is regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of practice, together with further objects and advantages thereof, may best be understood by reference to the following description taken in connection with the accompanying drawings in which:
FIG. 1 is a plot of the spectrum of a typical demodulated FM stereo signal as a function of frequency, f;
FIG. 2 is a pole-zero diagram in the complex plane representing the z-transform function for the preferred embodiment of the filter of the present invention;
FIG. 3 is a flow diagram describing the desired physical implementation of the present invention.
A proper understanding of the operation of the present invention can only be had with a knowledge of the spectral distribution of the signal which is applied to the circuit of the present invention. A demodulated FM signal includes essentially the three distinct components referred to above. In particular, the demodulated FM signal includes, in the frequency range from 0 to 15 kHz, a signal representing the sum of the left and right stereo channels, L+R. At a frequency of 19 kHz±2 Hz, as required by current FCC regulations, there is provided a 19 kHz pilot tone. Centered around a frequency of 38 kHz, which is twice the pilot tone frequency, left minus right channel information, L-R, is encoded in a double side band suppressed carrier (DSSC) signal. The left minus right channel information typically lies in a frequency band of from 23 kHz to 53 kHz, as shown. The spectrum of the signal shown in FIG. 1 is then essentially the spectrum of the signal which is provided as an input to the present invention, albeit in digitized form.
The design of the digital filter circuit of the present invention is based upon the design of a digital filter for extracting a digitized representation of the 19 kHz pilot tone. However, to accomplish the objectives of the present invention, it is not enough simply to apply conventional design methodologies to produce a digital filter having an extremely narrow pass band at a frequency of 19 kHz. In particular, the present inventors have selected a sampling rate for their digital filter which results in a greatly simplified implementation. Moreover, not only must the filter of the present invention produce a digitized sinusoidal 19 kHz signal, but it must also operate in such a way as to be able to readily produce a cosinusoidal version of the same signal. In order to provide circuit simplicity, it is also necessary to select additional pole locations for the z transform transfer function describing the digital filter in a special fashion. Moreover, once having selected additional transform poles for circuit simplification purposes, it also becomes necessary to select nearby transfer function zeros to produce a transfer function which effectively has only a single narrow passband centered at a frequency of 19 kHz.
FIG. 2 represents a preferred embodiment of the digital filter of the present invention for the case in which the number M of poles chosen is 6. In particular, the diagram shown in FIG. 2 is a pole-zero plot of the z transform H(z) of the preferred embodiment of the digital filter of the present invention. In this particular case, the number M of poles of H(z) is 6 and the sampling rate is 228 kHz which is equal to 2M×19 kHz. In the diagram of FIG. 2, as is the convention, transfer function zeros are designated with O's and transfer function poles are designated by X's. As is also known in the arts relating to digital signal processing, various points along the unit circle shown correspond to different values of angular frequency ω. In particular, an angle of 0° corresponds to a frequency of 0 Hz. As one moves in a counterclockwise direction as measured from the positive real axis along the unit circle, the value of ω increases monotonically with the angle, with 180° corresponding to a frequency of 114 kHz and with an angle of 360° corresponding to a frequency of 228 kHz in the present design. Since it is an object of the present invention to have a large response at a frequency of 19 kHz, it is therefore seen that it is necessary to have a pole located at an angle of 30° with respect to the positive real axis since 30° is to 360° as 19 kHz is to 228 kHz. However, as is also known in the digital signal processing arts, it is not desirable to have poles located directly on the unit circle since such pole locations lead to instability. Accordingly, the pole shown at an angle of 30° is located just within the unit circle. For reasons which will become clearer later, the pole shown at the 30° angle (and other poles also) preferably lies at a distance of 1 - 2^(2-N) from the origin. N can be any real number greater than 2 but is preferably an integer such as 8. Since the poles or zeros are required to occur in complex conjugate pairs for creating realizable circuits, a second matching pole is required at an angle of 330°.
If producing a sharp response peak at a frequency of 19 kHz were the only design objective of the present invention, one might be satisfied with poles at only those two locations, namely, at 30° and 330°. However, for purposes of circuit simplification, the filter of the present invention employs a number of poles distributed uniformly around a circle which lies just within the unit circle. In the present case, because the sampling rate is 228 kHz, a total of M=6 poles is selected, each pole being separated from its adjacent pole on the inner circle by an angle of 360°/M. This distribution of poles provides a transfer function whose denominator is of the form z^M + β. As discussed above, β is typically selected to be of the form 1 - 2^(2-N). However, the inclusion of the additional poles at angles of 90°, 150°, 210° and 270° would normally produce undesirable components in the filter output. In other words, with these poles alone being present, the filter would act to pass frequencies other than those desired. Accordingly, the design of the present invention employs zeros at these angles at locations adjacent to the poles to mitigate their effects. Accordingly, transfer function zeros are provided at angles of 90°, 150°, 210°, and 270° as shown in FIG. 2. However, these zeros are located on the unit circle itself. Such locations also serve to promote circuit simplicity in that the algebraic expansion of the factors involving these zeros does not result in the generation of coefficients requiring digital multiplication circuitry. In general, the zeros also occur in complex conjugate pairs and exhibit sufficient symmetry to promote implementation without excessive multiplication. However, the zeros situated at angles of 150° and 210° do not have correspondingly symmetrically located zeros in the right half plane. To place such zeros there would naturally defeat the 19 kHz selectivity of the filter. Additionally, zeros are provided at z=+1 and z=-1 to eliminate dc components and to further promote circuit simplicity.
Accordingly, it is seen that the preferred digital filter embodiment of the present invention possesses a z-transform transfer function which is substantially equal to

H(z) = H(z - 1)(z + 1)(z^2 + 1)(z^2 + √3z + 1)/(z^6 + β), with β = 1 - 2^(2-N),

wherein N is a real number greater than 2 and H is a scale factor. The above equation can be expanded and both the numerator and denominator divided by a factor of z^6. Doing so results in the following:

H(z) = H(1 + √3z^-1 + z^-2 - z^-4 - √3z^-5 - z^-6)/(1 + (1 - 2^(2-N))z^-6).

The second formula above for H(z) provides a much more direct form for indicating the input/output relationship for the digital filter. In particular, it is seen that

y(n) = H[x(n) + √3(x(n-1) - x(n-5)) + x(n-2) - x(n-4) - x(n-6)] - (1 - 2^(2-N))y(n-6).

In the above, N=8. The scale factor H is 0.00282. In the above equation the sampling period T has been suppressed for simplicity in accordance with often-employed conventions. The equation above for y(n) is written in a form which is readily implementable in digital circuitry.
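As a concrete illustration (my sketch, not part of the patent text), the difference equation above maps directly onto a few lines of code; the only true multiplication per sample is the one by √3, matching the single-multiplier structure described for FIG. 3:

```python
import numpy as np

SQRT3 = 1.7320508  # the lone "true" multiplier coefficient, approx. sqrt(3)

def pilot_filter(x, N=8, H=0.00282):
    """Run the reconstructed 19 kHz pilot-tone filter over samples x (fs = 228 kHz).

    Implements y(n) = H*[x(n) + sqrt(3)*(x(n-1) - x(n-5)) + x(n-2)
                         - x(n-4) - x(n-6)] - (1 - 2**(2 - N))*y(n-6),
    where (1 - 2**(2-N)) is realizable in hardware as a shift-and-subtract.
    """
    beta = 1.0 - 2.0 ** (2 - N)
    xp = np.concatenate([np.zeros(6), np.asarray(x, dtype=float)])
    y = np.zeros(len(xp))
    for n in range(6, len(xp)):
        acc = (xp[n] + SQRT3 * (xp[n - 1] - xp[n - 5])
               + xp[n - 2] - xp[n - 4] - xp[n - 6])
        y[n] = H * acc - beta * y[n - 6]
    return y[6:]
```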
The particular circuit for implementing the above digital filter is shown in FIG. 3. For example, the circuit of FIG. 3 includes a clocked shift register for storing digitally represented values x(n) through x(n-6). Additionally, the output y(n) is also supplied to a set of clocked shift registers which are similarly labeled. The digital circuit of FIG. 3 also includes adders, subtractors, and multipliers as shown. A shifter is also shown which substitutes for a multiplication step. The digital filter portion of the circuit shown in FIG. 3 is essentially that part of the circuit between the shift registers for the x values and the shift registers for the y values, inclusive. It is readily seen that the output y(n) is generated by the circuit shown. In particular, terms associated with x(n-1) and x(n-5) have been grouped together in a difference operation prior to multiplication by 1.732 which, to four significant digits, represents the decimal version of √3. This grouping is therefore seen to reduce the number of digital multipliers required. It is also seen that the digital filter portion of the present invention, whose embodiment is shown in FIG. 3, includes only this single multiplier. This is a significant advantage of the present invention. Additionally, by choosing N to be an integer greater than 2 in the above equation for H(z), it is possible to employ a shifter in place of a multiplier. Again, as is well known, multiplication by a power of 2 is readily accomplished in positional binary systems by a simple shift operation. The value N controls the proximity of the poles and zeros which lie adjacent to one another in FIG. 2. The higher the value of N, the closer the poles and zeros become. However, for practical implementation and avoidance of stability problems, N is preferably chosen in the present invention to be an integer near 8.
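A quick numerical check of this design (again my sketch, evaluating the H(z) reconstructed above with N = 8): the response shows a single sharp peak at 19 kHz whose gain is approximately unity for H = 0.00282, with heavy attenuation elsewhere in the band:

```python
import numpy as np

N, H, fs = 8, 0.00282, 228e3
beta = 1.0 - 2.0 ** (2 - N)
num = np.polymul([1, 0, 0, 0, -1], [1, np.sqrt(3), 1])  # (z^4 - 1)(z^2 + sqrt(3)z + 1)
den = [1, 0, 0, 0, 0, 0, beta]                          # z^6 + beta

for f_khz in (15, 19, 23, 38):
    z = np.exp(2j * np.pi * (f_khz * 1e3) / fs)
    gain = H * abs(np.polyval(num, z) / np.polyval(den, z))
    print(f"{f_khz:3d} kHz: gain = {gain:.3f}")  # ~1.0 at 19 kHz, near zero elsewhere
```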
Accordingly, it is seen that the digital filter circuit described above produces as an output y(n) a digitized representation of the 19 kHz pilot tone. Since the filter operates at a sampling rate of 228 kHz, each sample y(n) through y(n-6) is separated by a phase angle of 30° since 2M, which is equal to 12, divides 360° into 30° segments. Accordingly, the sample taken at y(n-3) represents a cosinusoidal signal since it exists in a phase shift relationship of 90° with respect to y(n). Use is now made of the following trigonometric identity:
Sin X Cos Y=1/2[Sin (X+Y)+Sin (X-Y)].
In the particular case at hand, X=Y and it is seen that Sin X Cos X = 1/2 Sin 2X. Accordingly, the present invention employs a multiplier to form the product of y(n) and y(n-3) to produce the resulting signal c(n) which represents a 38 kHz stereo carrier signal. In general, from the above trigonometric identity, it is seen that multiplication operates to produce the sum of two signals, one of which is based upon the sum of the two frequencies, and the other of which is based upon the difference of the two frequencies. It is to be noted in the filter described above that the selection of the number of poles M=6 for the digital filter provides an output signal y(n-3) which exhibits the proper phase with respect to y(n) to enable the production of the 38 kHz stereo carrier signal c(n) as the output of a digital multiplier circuit.
The generation of the signal c(n) enables the production of the signal labeled L-R in FIG. 3 which represents the algebraic difference between the left and right stereo channels. Based upon the principles incorporated in the above mentioned trigonometric identity, multiplication by the 38 kHz stereo carrier signal operates to shift the L-R portion of the incoming signal into the frequency range between 0 and 15 kHz. It also operates to frequency shift this portion of the signal into a band centered around 76 kHz, but since such frequency ranges are inaudible, this portion of the resulting signal can be ignored. In a like manner, the spectral information centered around 38 kHz in the incoming digitized and demodulated FM signal can also be ignored since it is likewise inaudible to the human ear or may be filtered out later. The only relevant portion then of the input signal is its L+R portion. Accordingly, a simple digital summer is provided which adds the L-R signal generated above to the incoming signal containing the L+R portion to produce a signal which substantially represents the digitized portion of the left channel. In a similar manner, a subtractor is also employed in the manner shown in FIG. 3 to produce the right channel. Symbolically, this is written as (L+R)-(L-R)=2R. In this manner then, the digital circuit of the present invention operates to decode the digitized and demodulated FM signal into its left and right hand stereo components.
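Putting the pieces together (a sketch under the same assumptions, reusing the pilot_filter sketch above; the amplitude scaling of the recovered carrier and the factor-of-two channel gain are ignored here, where a hardware design would normalize them):

```python
import numpy as np

def decode(x):
    """Decode a demodulated, digitized FM stereo stream x sampled at 228 kHz."""
    x = np.asarray(x, dtype=float)
    y = pilot_filter(x)              # 19 kHz pilot, a sampled sine (see sketch above)
    c = np.zeros(len(x))
    c[3:] = y[3:] * y[:-3]           # y(n)*y(n-3): sin*cos -> 38 kHz carrier c(n)
    lmr = x * c                      # x(n)*c(n): shifts the DSSC band down, giving L-R
    return x + lmr, x - lmr          # (L+R)+(L-R) -> left, (L+R)-(L-R) -> right
```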
Accordingly, it is seen that the digital circuit of the present invention provides a means for digitally decoding regularly broadcast FM stereo signals using digital circuitry which incorporates only three digital multipliers. In fact, the digital filter portion of the circuit of the present invention incorporates only a single digital multiplier. Such simply implementable filters are therefore readily incorporated in integrated circuits for FM stereo decoding purposes. It is also seen that such circuits require less chip area and less electrical power. While the circuit shown in FIG. 3 is the circuit of preference in the present invention, it is also noted that the shifter and subtractor which receive inputs from the register containing y(n-6) could be replaced by an additional multiplier circuit in the case that N is not an integer value. Additionally, it is noted that the discussion above has not at all referred to the number of bits employed in the various registers, shifters, adders, subtractors, and multipliers shown in FIG. 3. Various values may be employed without departing from either the scope or spirit of the present invention. However, for the purposes of separating FM stereo signals, 8 to 15 bits may be employed with 10 or 12 bits being preferred for this particular application. It is also noted that the sampling rate preferably employed herein is well above the Nyquist rate required for accurate signal reproduction. It is also noted that the output signals from the left and right channels in the present invention are typically supplied to digital-to-analog conversion devices and low pass filters for removing high frequency content which may be undesirably present as a result of the quantization and conversion processes.
While the invention has been described in detail herein in accord with certain preferred embodiments thereof, many modifications and changes therein may be effected by those skilled in the art. Accordingly, it is intended by the appended claims to cover all such modifications and changes as fall within the true spirit and scope of the invention.
By Thomas A. Whitelaw B.Sc., Ph.D. (auth.)
One A System of Vectors.- 1. Introduction.- 2. Description of the system E3.- 3. Directed line segments and position vectors.- 4. Addition and subtraction of vectors.- 5. Multiplication of a vector by a scalar.- 6. Section formula and collinear points.- 7. Centroids of a triangle and a tetrahedron.- 8. Coordinates and components.- 9. Scalar products.- 10. Postscript.- Exercises on chapter 1.- Two Matrices.- 11. Introduction.- 12. Basic nomenclature for matrices.- 13. Addition and subtraction of matrices.- 14. Multiplication of a matrix by a scalar.- 15. Multiplication of matrices.- 16. Properties and non-properties of matrix multiplication.- 17. Some special matrices and types of matrices.- 18. Transpose of a matrix.- 19. First considerations of matrix inverses.- 20. Properties of nonsingular matrices.- 21. Partitioned matrices.- Exercises on chapter 2.- Three Elementary Row Operations.- 22. Introduction.- 23. Some generalities concerning elementary row operations.- 24. Echelon matrices and reduced echelon matrices.- 25. Elementary matrices.- 26. Major new insights on matrix inverses.- 27. Generalities about systems of linear equations.- 28. Elementary row operations and systems of linear equations.- Exercises on chapter 3.- Four An Introduction to Determinants.- 29. Preface to the chapter.- 30. Minors, cofactors, and larger determinants.- 31. Basic properties of determinants.- 32. The multiplicative property of determinants.- 33. Another method for inverting a nonsingular matrix.- Exercises on chapter 4.- Five Vector Spaces.- 34. Introduction.- 35. The definition of a vector space, and examples.- 36. Simple consequences of the vector space axioms.- 37. Subspaces.- 38. Spanning sequences.- 39. Linear dependence and independence.- 40. Bases and dimension.- 41. Further theorems about bases and dimension.- 42. Sums of subspaces.- 43. Direct sums of subspaces.- Exercises on chapter 5.- Six Linear Mappings.- 44. Introduction.- 45. Some examples of linear mappings.- 46. Some elementary facts about linear mappings.- 47. New linear mappings from old.- 48. Image space and kernel of a linear mapping.- 49. Rank and nullity.- 50. Row- and column-rank of a matrix.- 52. Rank inequalities.- 53. Vector spaces of linear mappings.- Exercises on chapter 6.- Seven Matrices From Linear Mappings.- 54. Introduction.- 55. The main definition and its immediate consequences.- 56. Matrices of sums, etc. of linear mappings.- 58. Matrix of a linear mapping w.r.t. different bases.- 60. Vector space isomorphisms.- Exercises on chapter 7.- Eight Eigenvalues, Eigenvectors and Diagonalization.- 61. Introduction.- 62. Characteristic polynomials.- 64. Eigenvalues in the case F = ℂ.- 65. Diagonalization of linear transformations.- 66. Diagonalization of square matrices.- 67. The hermitian conjugate of a complex matrix.- 68. Eigenvalues of special kinds of matrices.- Exercises on chapter 8.- Nine Euclidean Spaces.- 69. Introduction.- 70. Some elementary results about euclidean spaces.- 71. Orthonormal sequences and bases.- 72. Length-preserving transformations of a euclidean space.- 73. Orthogonal diagonalization of a real symmetric matrix.- Exercises on chapter 9.- Ten Quadratic Forms.- 74. Introduction.- 75. Change of basis and change of variable.- 76. Diagonalization of a quadratic form.- 77. Invariants of a quadratic form.- 78. Orthogonal diagonalization of a real quadratic form.- 79. Positive-definite real quadratic forms.- 80. The principal minors theorem.- Exercises on chapter 10.- Appendix: Mappings.- Answers to exercises.
Similar introduction books
Organized thematically, this introduction outlines the basic principles and moves on to examine the methods and theory of CDA (critical discourse analysis). Topics covered include text and context, language and inequality, choice and determination, history and process, ideology and identity. Jan Blommaert focuses on how language can offer a crucial understanding of wider aspects of power relations, arguing that CDA should specifically analyse the effects of power.
This classic textbook has been reprinted by The Institute of Materials to provide undergraduates with a broad overview of metallurgy from atomic theory, thermodynamics, reaction kinetics, and crystal physics.
Modern telecom networks are automated, and are run by OSS software or "operational support systems". These manage modern telecom networks and provide the data that is needed in the day-to-day running of a telecom network. OSS software is also responsible for issuing commands to the network infrastructure to activate new service offerings, provision services for new customers, and detect and correct network faults.
- Differential equations. An introduction to modern methods and applications
- Biomolecular Archaeology: An Introduction
- Credit Derivatives and Structured Credit: A Guide for Investors (The Wiley Finance Series)
- A Critical Introduction to the New Testament (Expanded Edition)
Extra info for An Introduction to Linear Algebra
... = I. Hence in this case D is nonsingular and D^-1 = E. ... The whole proposition is now proved.
Let A = [a b; c d] be an arbitrary matrix in F^(2×2). Then A is nonsingular if and only if ad - bc ≠ 0; and if ad - bc ≠ 0, A^-1 = (1/(ad - bc))[d -b; -c a].
Proof. Let k = ad - bc, and let B = [d -b; -c a]. By direct calculation, we find that AB = BA = kI. In the case k ≠ 0, it follows that AC = CA = I, where C = (1/k)B, and hence that A is nonsingular with inverse (1/k)B. Consider the remaining case where k = 0 and, therefore, AB = O.
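(A quick numerical spot-check of this proposition, not from the book, using standard numpy:)

```python
import numpy as np

# For A = [[a, b], [c, d]] with k = ad - bc != 0, the inverse is (1/k)[[d, -b], [-c, a]].
a, b, c, d = 2.0, 5.0, 1.0, 4.0
A = np.array([[a, b], [c, d]])
k = a * d - b * c                          # here k = 3
B = np.array([[d, -b], [-c, a]])           # the matrix B from the proof
assert np.allclose(A @ B, k * np.eye(2))   # AB = kI
assert np.allclose(B @ A, k * np.eye(2))   # BA = kI
assert np.allclose(np.linalg.inv(A), B / k)
print("2x2 inverse formula verified for this A")
```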
In F^((2n)×(2n)), M is the nonsingular matrix [I A; B C], partitioned after its nth row and nth column; and M^-1, similarly partitioned, is [W X; Y Z]. Prove that C - BA is nonsingular and that its inverse is Z, and express W, X, Y in terms of A, B, C.
24. Let A and B be the matrices ... and ... respectively. Find general formulae for A^n and B^n.
CHAPTER THREE ELEMENTARY ROW OPERATIONS
22. Introduction. The title of the chapter refers to operations of three standard types which, for various constructive purposes, we may carry out on the rows of a matrix.
Solution. Call the given vectors x and y, and let θ be the angle between them. By the definition of the scalar product, x · y = |x||y| cos θ. Here x · y = 9, |x| = √2 and |y| = √54 = 3√6. Hence 9 = √2 · 3√6 cos θ = 6√3 cos θ, and so cos θ = 9/(6√3) = √3/2. It follows that θ = π/6 (or 30°).
This enables us to prove mechanically the following "distributive laws".
For all x, y, z ∈ E3, (i) x · (y + z) = x · y + x · z and (ii) (y + z) · x = y · x + z · x.
For all x, y ∈ E3 and λ ∈ ℝ, x · (λy) = (λx) · y = λ(x · y).
Let x = (x1, x2, x3), y = (y1, y2, y3), z = (z1, z2, z3).
Become A Logical Mathematical Thinker
The main objective of this course is to empower students with skills for proving propositions and theorems.
This course bridges the gap between introductory mathematics courses in algebra, linear algebra, calculus and advanced courses like mathematical analysis and abstract algebra.
Another objective is to pose interesting problems that require you to learn how to manipulate the fundamental objects of mathematics: sets, functions, sequences, and relations.
The topics discussed in this course are the following:
- mathematical puzzles
- propositional logic
- predicate logic
- elementary set theory
- elementary number theory
- principles of counting.
The most important aspect of this course is that you will learn what it means to prove a mathematical proposition. We accomplish this by putting you in an environment with mathematical objects whose structure is rich enough to have interesting propositions.
The environments we use are propositions and predicates, finite sets and relations, integers, fractions and rational numbers, and infinite sets.
Each topic in this course is standard except for the first one, puzzles. There are several reasons for including puzzles. First and foremost, a challenging puzzle can be a microcosm of mathematical development. A great puzzle is like a laboratory for proving propositions. The puzzler initially feels the tension that comes from not knowing how to start, just as the mathematician does when first investigating a topic or trying to solve a problem. The mathematician "plays" with the topic or problem, developing conjectures which he or she then tests in some special cases. Similarly, the puzzler "plays" with the puzzle. Sometimes the conjectures turn out to be provable, but often they do not, and the mathematician goes back to playing. At some stage, the puzzler (mathematician) develops a sufficient sense of the structure, and only then can he or she begin to build the solution (prove the theorem).
This multi-step process is perfectly mirrored in solving the KenKen problems this course presents. Some aspects of the solutions motivate ideas you will encounter later in the course. For example, modular congruence is a standard topic in number theory, and it is also useful in solving some KenKen problems. Another reason for including puzzles is to foster creativity.
Welcome to Become A Logical-Mathematical Thinker! Below, please find some general information on the course and its requirements.
Time Commitment: While learning styles can vary considerably and any particular student will take more or less time to learn or read, we estimate that the “average” student will take 112.25 hours to complete this course. We recommend that you work through the course at a pace that is comfortable for you and allows you to make regular (daily, or at least weekly) progress. It’s a good idea to also schedule your study time in advance and try as best as you can to stick to that schedule.
Tips/Suggestions: Learning new material can be challenging, so below we’ve compiled a few suggested study strategies to help you succeed.
Take notes on the various terms, practices, and theories as you read. This can help you differentiate and contextualize concepts and later provide you with a refresher as you study.
As you progress through the materials, take time to test yourself on what you have retained and how well you understand the concepts. The process of reflection is important for creating a memory of the materials you learn; it will increase the probability that you ultimately retain the information.
Upon successful completion of this course, you will be able to:
- Read and dissect proofs of elementary propositions related to discrete mathematical objects such as integers, finite sets, graphs and relations, and functions.
- Translate verbal statements into symbolic ones by using the elements of mathematical logic.
- Determine when a proposed mathematical argument is logically correct.
- Determine when a compound sentence is a tautology, a contradiction, or a contingency.
- Translate riddles and other brainteasers into the language of predicates and propositions.
- Solve problems related to place value, divisors, and remainders.
- Use modular arithmetic to solve various equations, including quadratic equations in Z6, Z7, Z11 and Diophantine equations (a short sketch follows this list).
- Prove and use the salient characteristics of the rational, irrational, and real number systems to verify properties of various number systems.
- Use mathematical induction to construct proofs of propositions about sets of positive integers.
- Classify relations as being reflexive, symmetric, antisymmetric, transitive, a partial ordering, a total ordering, or an equivalence relation.
- Determine if a relation is a function, and if so, whether or not it is a bijection.
- Manipulate finite and infinite sets by using functions and set operations.
- Determine if a set is finite, countable, or uncountable.
- Use the properties of countable and uncountable sets in various situations.
- Recognize some standard countable and uncountable sets.
- Determine and effectively use an appropriate counting tool to find the number of objects in a finite set.
Throughout this course, you’ll also see related learning outcomes identified in each unit. You can use the learning outcomes to help organize your learning and gauge your progress.
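As one concrete illustration of the modular-arithmetic outcome flagged in the list above: in small rings such as Z6 and Z7 a brute-force search is all that is needed. A minimal Python sketch, with function names and examples of my own choosing rather than the course's:

```python
def solve_quadratic_mod(a, b, c, n):
    """Return all x in Z_n with a*x^2 + b*x + c congruent to 0 (mod n)."""
    return [x for x in range(n) if (a * x * x + b * x + c) % n == 0]

print(solve_quadratic_mod(1, 1, 4, 6))   # x^2 + x + 4 = 0 in Z6 -> [1, 4]
print(solve_quadratic_mod(1, 0, -2, 7))  # x^2 = 2 in Z7 -> [3, 4]
```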
Lessons (sample lesson)
- Place Value Notation
- Prime numbers
- An Infinitude of Primes
- Conjectures about primes
- The Twin Prime Conjecture
- Goldbach's Conjecture
- The Riemann Hypothesis
- Fundamental Theorem of Arithmetic (FTA)
- Modular Arithmetic, the Algebra of Remainders
- Divisibility by 3, 9, and 11
- Building the Rings to Z6 and Z7
- The Floor or Integer Part Function
- Number Theory 1
- GCD and LCM
- Divisor Function
- Solving Ax + By = C (see the sketch below)
- Integer Divisibility |
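The "Solving Ax + By = C" lesson above admits a compact illustration; the following Python sketch (mine, not course material) uses the extended Euclidean algorithm to produce one integer solution whenever C is divisible by gcd(A, B):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve_linear_diophantine(A, B, C):
    """One integer solution (x, y) of A*x + B*y = C, or None if none exists."""
    g, x, y = ext_gcd(A, B)
    if C % g != 0:
        return None
    return (x * (C // g), y * (C // g))

print(solve_linear_diophantine(11, 19, 1))  # -> (7, -4), since 11*7 + 19*(-4) == 1
```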